2025-09-17 00:00:08.033882 | Job console starting
2025-09-17 00:00:08.054743 | Updating git repos
2025-09-17 00:00:08.142222 | Cloning repos into workspace
2025-09-17 00:00:08.334174 | Restoring repo states
2025-09-17 00:00:08.351208 | Merging changes
2025-09-17 00:00:08.351224 | Checking out repos
2025-09-17 00:00:08.751623 | Preparing playbooks
2025-09-17 00:00:09.384887 | Running Ansible setup
2025-09-17 00:00:13.750710 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-17 00:00:15.159457 |
2025-09-17 00:00:15.159598 | PLAY [Base pre]
2025-09-17 00:00:15.187386 |
2025-09-17 00:00:15.187508 | TASK [Setup log path fact]
2025-09-17 00:00:15.220846 | orchestrator | ok
2025-09-17 00:00:15.355328 |
2025-09-17 00:00:15.355473 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-17 00:00:15.414986 | orchestrator | ok
2025-09-17 00:00:15.454554 |
2025-09-17 00:00:15.454667 | TASK [emit-job-header : Print job information]
2025-09-17 00:00:15.604756 | # Job Information
2025-09-17 00:00:15.604967 | Ansible Version: 2.16.14
2025-09-17 00:00:15.605007 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-17 00:00:15.605043 | Pipeline: periodic-midnight
2025-09-17 00:00:15.605067 | Executor: 521e9411259a
2025-09-17 00:00:15.605089 | Triggered by: https://github.com/osism/testbed
2025-09-17 00:00:15.605111 | Event ID: f3587aab02a648b691f580c095977ddd
2025-09-17 00:00:15.631294 |
2025-09-17 00:00:15.631415 | LOOP [emit-job-header : Print node information]
2025-09-17 00:00:15.970245 | orchestrator | ok:
2025-09-17 00:00:15.970448 | orchestrator | # Node Information
2025-09-17 00:00:15.970484 | orchestrator | Inventory Hostname: orchestrator
2025-09-17 00:00:15.970505 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-17 00:00:15.970543 | orchestrator | Username: zuul-testbed05
2025-09-17 00:00:15.970563 | orchestrator | Distro: Debian 12.12
2025-09-17 00:00:15.970583 | orchestrator | Provider: static-testbed
2025-09-17 00:00:15.970616 | orchestrator | Region:
2025-09-17 00:00:15.970670 | orchestrator | Label: testbed-orchestrator
2025-09-17 00:00:15.970692 | orchestrator | Product Name: OpenStack Nova
2025-09-17 00:00:15.970709 | orchestrator | Interface IP: 81.163.193.140
2025-09-17 00:00:15.997798 |
2025-09-17 00:00:15.997905 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-17 00:00:17.050052 | orchestrator -> localhost | changed
2025-09-17 00:00:17.057381 |
2025-09-17 00:00:17.057475 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-17 00:00:20.251013 | orchestrator -> localhost | changed
2025-09-17 00:00:20.262046 |
2025-09-17 00:00:20.262144 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-17 00:00:20.854970 | orchestrator -> localhost | ok
2025-09-17 00:00:20.860536 |
2025-09-17 00:00:20.860632 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-17 00:00:20.877811 | orchestrator | ok
2025-09-17 00:00:20.890717 | orchestrator | included: /var/lib/zuul/builds/dedceda2b2d442e38d351229c1f15473/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-17 00:00:20.897568 |
2025-09-17 00:00:20.897645 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-17 00:00:25.250616 | orchestrator -> localhost | Generating public/private rsa key pair.
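The "Create Temp SSH key" task generates a per-build RSA keypair in the build's work directory. A minimal sketch of an equivalent invocation, assuming the role calls `ssh-keygen` roughly like this (the exact flags used by add-build-sshkey are an assumption; the key size and comment match the output below):

```shell
# Sketch: create a passphrase-less per-build RSA key, as the
# add-build-sshkey role does. WORK stands in for the Zuul work dir.
WORK=$(mktemp -d)
BUILD_UUID=dedceda2b2d442e38d351229c1f15473
ssh-keygen -t rsa -b 3072 -N "" -C zuul-build-sshkey \
    -f "$WORK/${BUILD_UUID}_id_rsa"
```

This produces the `<uuid>_id_rsa` / `<uuid>_id_rsa.pub` pair and the fingerprint/randomart banner seen in the log.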
2025-09-17 00:00:25.250802 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/dedceda2b2d442e38d351229c1f15473/work/dedceda2b2d442e38d351229c1f15473_id_rsa
2025-09-17 00:00:25.251090 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/dedceda2b2d442e38d351229c1f15473/work/dedceda2b2d442e38d351229c1f15473_id_rsa.pub
2025-09-17 00:00:25.251120 | orchestrator -> localhost | The key fingerprint is:
2025-09-17 00:00:25.251141 | orchestrator -> localhost | SHA256:edWlhjPLhjb5CJ8RqGIMJIL6aXJjB0+DX+pm26JFIE8 zuul-build-sshkey
2025-09-17 00:00:25.251160 | orchestrator -> localhost | The key's randomart image is:
2025-09-17 00:00:25.251185 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-17 00:00:25.251205 | orchestrator -> localhost | |+ . .|
2025-09-17 00:00:25.251223 | orchestrator -> localhost | |oo . o o |
2025-09-17 00:00:25.251239 | orchestrator -> localhost | |o E. . . = + |
2025-09-17 00:00:25.251256 | orchestrator -> localhost | |.+o+o .. . * = |
2025-09-17 00:00:25.251272 | orchestrator -> localhost | | ..*=+. S B + |
2025-09-17 00:00:25.251293 | orchestrator -> localhost | |. Bo=. = B |
2025-09-17 00:00:25.251310 | orchestrator -> localhost | | = +. + . |
2025-09-17 00:00:25.251327 | orchestrator -> localhost | | .=. |
2025-09-17 00:00:25.251344 | orchestrator -> localhost | | .+.o. |
2025-09-17 00:00:25.251361 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-17 00:00:25.251406 | orchestrator -> localhost | ok: Runtime: 0:00:03.730843
2025-09-17 00:00:25.257374 |
2025-09-17 00:00:25.257450 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-17 00:00:25.297287 | orchestrator | ok
2025-09-17 00:00:25.312018 | orchestrator | included: /var/lib/zuul/builds/dedceda2b2d442e38d351229c1f15473/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-17 00:00:25.326927 |
2025-09-17 00:00:25.327014 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-17 00:00:25.349473 | orchestrator | skipping: Conditional result was False
2025-09-17 00:00:25.355826 |
2025-09-17 00:00:25.355907 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-17 00:00:26.081757 | orchestrator | changed
2025-09-17 00:00:26.086745 |
2025-09-17 00:00:26.086827 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-17 00:00:26.400260 | orchestrator | ok
2025-09-17 00:00:26.405221 |
2025-09-17 00:00:26.405295 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-17 00:00:26.856661 | orchestrator | ok
2025-09-17 00:00:26.867386 |
2025-09-17 00:00:26.867481 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-17 00:00:27.359351 | orchestrator | ok
2025-09-17 00:00:27.364229 |
2025-09-17 00:00:27.364314 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-17 00:00:27.399275 | orchestrator | skipping: Conditional result was False
2025-09-17 00:00:27.407041 |
2025-09-17 00:00:27.407138 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-17 00:00:28.133209 | orchestrator -> localhost | changed
2025-09-17 00:00:28.153568 |
2025-09-17 00:00:28.153668 | TASK [add-build-sshkey : Add back temp key]
2025-09-17 00:00:28.820447 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/dedceda2b2d442e38d351229c1f15473/work/dedceda2b2d442e38d351229c1f15473_id_rsa (zuul-build-sshkey)
2025-09-17 00:00:28.820660 | orchestrator -> localhost | ok: Runtime: 0:00:00.028275
2025-09-17 00:00:28.826308 |
2025-09-17 00:00:28.828647 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-17 00:00:29.397991 | orchestrator | ok
2025-09-17 00:00:29.405144 |
2025-09-17 00:00:29.405228 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-17 00:00:29.449231 | orchestrator | skipping: Conditional result was False
2025-09-17 00:00:29.536881 |
2025-09-17 00:00:29.536979 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-17 00:00:29.941573 | orchestrator | ok
2025-09-17 00:00:29.959888 |
2025-09-17 00:00:29.959986 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-17 00:00:30.009800 | orchestrator | ok
2025-09-17 00:00:30.018962 |
2025-09-17 00:00:30.019054 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-17 00:00:30.856803 | orchestrator -> localhost | ok
2025-09-17 00:00:30.862949 |
2025-09-17 00:00:30.863033 | TASK [validate-host : Collect information about the host]
2025-09-17 00:00:32.615548 | orchestrator | ok
2025-09-17 00:00:32.642666 |
2025-09-17 00:00:32.642781 | TASK [validate-host : Sanitize hostname]
2025-09-17 00:00:32.765338 | orchestrator | ok
2025-09-17 00:00:32.769678 |
2025-09-17 00:00:32.769759 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-17 00:00:34.033323 | orchestrator -> localhost | changed
2025-09-17 00:00:34.039386 |
2025-09-17 00:00:34.039478 | TASK [validate-host : Collect information about zuul worker]
2025-09-17 00:00:34.517688 | orchestrator | ok
2025-09-17 00:00:34.527224 |
2025-09-17 00:00:34.527326 | TASK [validate-host : Write out all zuul information for each host]
2025-09-17 00:00:36.013468 | orchestrator -> localhost | changed
2025-09-17 00:00:36.028866 |
2025-09-17 00:00:36.028970 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-17 00:00:36.319214 | orchestrator | ok
2025-09-17 00:00:36.325777 |
2025-09-17 00:00:36.325878 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-17 00:01:16.176592 | orchestrator | changed:
2025-09-17 00:01:16.176818 | orchestrator | .d..t...... src/
2025-09-17 00:01:16.176852 | orchestrator | .d..t...... src/github.com/
2025-09-17 00:01:16.176876 | orchestrator | .d..t...... src/github.com/osism/
2025-09-17 00:01:16.176898 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-17 00:01:16.176919 | orchestrator | RedHat.yml
2025-09-17 00:01:16.191217 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-17 00:01:16.191235 | orchestrator | RedHat.yml
2025-09-17 00:01:16.191287 | orchestrator | = 1.53.0"...
2025-09-17 00:01:27.805154 | orchestrator | 00:01:27.804 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-17 00:01:27.969765 | orchestrator | 00:01:27.969 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-17 00:01:28.449472 | orchestrator | 00:01:28.449 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-17 00:01:28.831662 | orchestrator | 00:01:28.831 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-17 00:01:29.630137 | orchestrator | 00:01:29.629 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-17 00:01:29.707551 | orchestrator | 00:01:29.704 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-17 00:01:30.214167 | orchestrator | 00:01:30.212 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-17 00:01:30.214342 | orchestrator | 00:01:30.212 STDOUT terraform: Providers are signed by their developers.
2025-09-17 00:01:30.214351 | orchestrator | 00:01:30.212 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-17 00:01:30.214355 | orchestrator | 00:01:30.212 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-17 00:01:30.214360 | orchestrator | 00:01:30.212 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-17 00:01:30.214370 | orchestrator | 00:01:30.212 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-17 00:01:30.214377 | orchestrator | 00:01:30.212 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-17 00:01:30.214381 | orchestrator | 00:01:30.213 STDOUT terraform: you run "tofu init" in the future.
2025-09-17 00:01:30.215235 | orchestrator | 00:01:30.215 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-17 00:01:30.215786 | orchestrator | 00:01:30.215 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-17 00:01:30.216612 | orchestrator | 00:01:30.215 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-17 00:01:30.216846 | orchestrator | 00:01:30.216 STDOUT terraform: should now work.
2025-09-17 00:01:30.217375 | orchestrator | 00:01:30.216 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-17 00:01:30.217711 | orchestrator | 00:01:30.217 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-17 00:01:30.218540 | orchestrator | 00:01:30.217 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-17 00:01:30.377033 | orchestrator | 00:01:30.376 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-09-17 00:01:30.377169 | orchestrator | 00:01:30.376 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-17 00:01:30.608420 | orchestrator | 00:01:30.608 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-17 00:01:30.608494 | orchestrator | 00:01:30.608 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-17 00:01:30.608505 | orchestrator | 00:01:30.608 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-17 00:01:30.608513 | orchestrator | 00:01:30.608 STDOUT terraform: for this configuration.
2025-09-17 00:01:30.787087 | orchestrator | 00:01:30.786 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-09-17 00:01:30.787172 | orchestrator | 00:01:30.786 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-17 00:01:30.903250 | orchestrator | 00:01:30.903 STDOUT terraform: ci.auto.tfvars
2025-09-17 00:01:30.906776 | orchestrator | 00:01:30.906 STDOUT terraform: default_custom.tf
2025-09-17 00:01:31.054103 | orchestrator | 00:01:31.053 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
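The repeated Terragrunt warnings above are actionable: `TERRAGRUNT_TFPATH` is superseded by `TG_TF_PATH`, and bare subcommands such as `workspace` and `fmt` move under `terragrunt run --`. A sketch of the migrated invocations, with the path taken from the log (the actual commands are left as comments since they require Terragrunt and OpenTofu to be installed):

```shell
# Deprecated form this job currently uses:
#   TERRAGRUNT_TFPATH=/home/zuul-testbed05/terraform terragrunt workspace new ci
#   TERRAGRUNT_TFPATH=/home/zuul-testbed05/terraform terragrunt fmt
# Replacement suggested by the warnings:
export TG_TF_PATH=/home/zuul-testbed05/terraform
#   terragrunt run -- workspace new ci
#   terragrunt run -- fmt
```

Switching the job to `TG_TF_PATH` and `terragrunt run --` would silence these warnings before the old forms are removed.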
2025-09-17 00:01:31.967071 | orchestrator | 00:01:31.966 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-17 00:01:32.490044 | orchestrator | 00:01:32.489 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-17 00:01:32.802377 | orchestrator | 00:01:32.801 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-17 00:01:32.803918 | orchestrator | 00:01:32.802 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-17 00:01:32.803971 | orchestrator | 00:01:32.802 STDOUT terraform:   + create
2025-09-17 00:01:32.803987 | orchestrator | 00:01:32.802 STDOUT terraform:  <= read (data resources)
2025-09-17 00:01:32.804000 | orchestrator | 00:01:32.802 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-17 00:01:32.813873 | orchestrator | 00:01:32.813 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-09-17 00:01:32.813934 | orchestrator | 00:01:32.813 STDOUT terraform:   # (config refers to values not yet known)
2025-09-17 00:01:32.813943 | orchestrator | 00:01:32.813 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-17 00:01:32.813949 | orchestrator | 00:01:32.813 STDOUT terraform:   + checksum = (known after apply)
2025-09-17 00:01:32.813955 | orchestrator | 00:01:32.813 STDOUT terraform:   + created_at = (known after apply)
2025-09-17 00:01:32.813979 | orchestrator | 00:01:32.813 STDOUT terraform:   + file = (known after apply)
2025-09-17 00:01:32.814043 | orchestrator | 00:01:32.813 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.814052 | orchestrator | 00:01:32.814 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 00:01:32.814077 | orchestrator | 00:01:32.814 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-09-17 00:01:32.814115 | orchestrator | 00:01:32.814 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-09-17 00:01:32.814137 | orchestrator | 00:01:32.814 STDOUT terraform:   + most_recent = true
2025-09-17 00:01:32.822162 | orchestrator | 00:01:32.822 STDOUT terraform:   + name = (known after apply)
2025-09-17 00:01:32.822207 | orchestrator | 00:01:32.822 STDOUT terraform:   + protected = (known after apply)
2025-09-17 00:01:32.822212 | orchestrator | 00:01:32.822 STDOUT terraform:   + region = (known after apply)
2025-09-17 00:01:32.822269 | orchestrator | 00:01:32.822 STDOUT terraform:   + schema = (known after apply)
2025-09-17 00:01:32.822279 | orchestrator | 00:01:32.822 STDOUT terraform:   + size_bytes = (known after apply)
2025-09-17 00:01:32.822313 | orchestrator | 00:01:32.822 STDOUT terraform:   + tags = (known after apply)
2025-09-17 00:01:32.822342 | orchestrator | 00:01:32.822 STDOUT terraform:   + updated_at = (known after apply)
2025-09-17 00:01:32.822352 | orchestrator | 00:01:32.822 STDOUT terraform:   }
2025-09-17 00:01:32.822494 | orchestrator | 00:01:32.822 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-09-17 00:01:32.822504 | orchestrator | 00:01:32.822 STDOUT terraform:   # (config refers to values not yet known)
2025-09-17 00:01:32.822548 | orchestrator | 00:01:32.822 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-17 00:01:32.822579 | orchestrator | 00:01:32.822 STDOUT terraform:   + checksum = (known after apply)
2025-09-17 00:01:32.822609 | orchestrator | 00:01:32.822 STDOUT terraform:   + created_at = (known after apply)
2025-09-17 00:01:32.822645 | orchestrator | 00:01:32.822 STDOUT terraform:   + file = (known after apply)
2025-09-17 00:01:32.822678 | orchestrator | 00:01:32.822 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.822703 | orchestrator | 00:01:32.822 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 00:01:32.822743 | orchestrator | 00:01:32.822 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-09-17 00:01:32.822750 | orchestrator | 00:01:32.822 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-09-17 00:01:32.822796 | orchestrator | 00:01:32.822 STDOUT terraform:   + most_recent = true
2025-09-17 00:01:32.822804 | orchestrator | 00:01:32.822 STDOUT terraform:   + name = (known after apply)
2025-09-17 00:01:32.822838 | orchestrator | 00:01:32.822 STDOUT terraform:   + protected = (known after apply)
2025-09-17 00:01:32.822849 | orchestrator | 00:01:32.822 STDOUT terraform:   + region = (known after apply)
2025-09-17 00:01:32.822895 | orchestrator | 00:01:32.822 STDOUT terraform:   + schema = (known after apply)
2025-09-17 00:01:32.822903 | orchestrator | 00:01:32.822 STDOUT terraform:   + size_bytes = (known after apply)
2025-09-17 00:01:32.822945 | orchestrator | 00:01:32.822 STDOUT terraform:   + tags = (known after apply)
2025-09-17 00:01:32.822975 | orchestrator | 00:01:32.822 STDOUT terraform:   + updated_at = (known after apply)
2025-09-17 00:01:32.822982 | orchestrator | 00:01:32.822 STDOUT terraform:   }
2025-09-17 00:01:32.823017 | orchestrator | 00:01:32.822 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-09-17 00:01:32.823034 | orchestrator | 00:01:32.823 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-09-17 00:01:32.823071 | orchestrator | 00:01:32.823 STDOUT terraform:   + content = (known after apply)
2025-09-17 00:01:32.823109 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-17 00:01:32.823147 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-17 00:01:32.823193 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-17 00:01:32.823229 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-17 00:01:32.823265 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-17 00:01:32.823302 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-17 00:01:32.823327 | orchestrator | 00:01:32.823 STDOUT terraform:   + directory_permission = "0777"
2025-09-17 00:01:32.823351 | orchestrator | 00:01:32.823 STDOUT terraform:   + file_permission = "0644"
2025-09-17 00:01:32.823387 | orchestrator | 00:01:32.823 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-09-17 00:01:32.823427 | orchestrator | 00:01:32.823 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.823435 | orchestrator | 00:01:32.823 STDOUT terraform:   }
2025-09-17 00:01:32.823464 | orchestrator | 00:01:32.823 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-09-17 00:01:32.823490 | orchestrator | 00:01:32.823 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-09-17 00:01:32.823526 | orchestrator | 00:01:32.823 STDOUT terraform:   + content = (known after apply)
2025-09-17 00:01:32.823561 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-17 00:01:32.823597 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-17 00:01:32.823632 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-17 00:01:32.823669 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-17 00:01:32.823705 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-17 00:01:32.823741 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-17 00:01:32.823765 | orchestrator | 00:01:32.823 STDOUT terraform:   + directory_permission = "0777"
2025-09-17 00:01:32.823791 | orchestrator | 00:01:32.823 STDOUT terraform:   + file_permission = "0644"
2025-09-17 00:01:32.823823 | orchestrator | 00:01:32.823 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-09-17 00:01:32.824029 | orchestrator | 00:01:32.823 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.824037 | orchestrator | 00:01:32.823 STDOUT terraform:   }
2025-09-17 00:01:32.824046 | orchestrator | 00:01:32.823 STDOUT terraform:   # local_file.inventory will be created
2025-09-17 00:01:32.824050 | orchestrator | 00:01:32.823 STDOUT terraform:   + resource "local_file" "inventory" {
2025-09-17 00:01:32.824054 | orchestrator | 00:01:32.823 STDOUT terraform:   + content = (known after apply)
2025-09-17 00:01:32.824065 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-17 00:01:32.824070 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-17 00:01:32.824073 | orchestrator | 00:01:32.823 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-17 00:01:32.824080 | orchestrator | 00:01:32.824 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-17 00:01:32.824084 | orchestrator | 00:01:32.824 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-17 00:01:32.824124 | orchestrator | 00:01:32.824 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-17 00:01:32.824149 | orchestrator | 00:01:32.824 STDOUT terraform:   + directory_permission = "0777"
2025-09-17 00:01:32.824171 | orchestrator | 00:01:32.824 STDOUT terraform:   + file_permission = "0644"
2025-09-17 00:01:32.824313 | orchestrator | 00:01:32.824 STDOUT terraform:   + filename = "inventory.ci"
2025-09-17 00:01:32.824362 | orchestrator | 00:01:32.824 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.824372 | orchestrator | 00:01:32.824 STDOUT terraform:   }
2025-09-17 00:01:32.824387 | orchestrator | 00:01:32.824 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-09-17 00:01:32.824395 | orchestrator | 00:01:32.824 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-09-17 00:01:32.824404 | orchestrator | 00:01:32.824 STDOUT terraform:   + content = (sensitive value)
2025-09-17 00:01:32.824411 | orchestrator | 00:01:32.824 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-17 00:01:32.824423 | orchestrator | 00:01:32.824 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-17 00:01:32.824437 | orchestrator | 00:01:32.824 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-17 00:01:32.824481 | orchestrator | 00:01:32.824 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-17 00:01:32.824509 | orchestrator | 00:01:32.824 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-17 00:01:32.824551 | orchestrator | 00:01:32.824 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-17 00:01:32.824567 | orchestrator | 00:01:32.824 STDOUT terraform:   + directory_permission = "0700"
2025-09-17 00:01:32.824580 | orchestrator | 00:01:32.824 STDOUT terraform:   + file_permission = "0600"
2025-09-17 00:01:32.824623 | orchestrator | 00:01:32.824 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-09-17 00:01:32.824652 | orchestrator | 00:01:32.824 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.824667 | orchestrator | 00:01:32.824 STDOUT terraform:   }
2025-09-17 00:01:32.824706 | orchestrator | 00:01:32.824 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-09-17 00:01:32.824721 | orchestrator | 00:01:32.824 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-09-17 00:01:32.824765 | orchestrator | 00:01:32.824 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.824778 | orchestrator | 00:01:32.824 STDOUT terraform:   }
2025-09-17 00:01:32.824827 | orchestrator | 00:01:32.824 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-17 00:01:32.824880 | orchestrator | 00:01:32.824 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-17 00:01:32.824994 | orchestrator | 00:01:32.824 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 00:01:32.825009 | orchestrator | 00:01:32.824 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 00:01:32.825019 | orchestrator | 00:01:32.824 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.825034 | orchestrator | 00:01:32.824 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 00:01:32.825047 | orchestrator | 00:01:32.825 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 00:01:32.825118 | orchestrator | 00:01:32.825 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-09-17 00:01:32.825134 | orchestrator | 00:01:32.825 STDOUT terraform:   + region = (known after apply)
2025-09-17 00:01:32.825171 | orchestrator | 00:01:32.825 STDOUT terraform:   + size = 80
2025-09-17 00:01:32.825208 | orchestrator | 00:01:32.825 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 00:01:32.825265 | orchestrator | 00:01:32.825 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 00:01:32.825278 | orchestrator | 00:01:32.825 STDOUT terraform:   }
2025-09-17 00:01:32.825327 | orchestrator | 00:01:32.825 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-17 00:01:32.825367 | orchestrator | 00:01:32.825 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 00:01:32.825397 | orchestrator | 00:01:32.825 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 00:01:32.825411 | orchestrator | 00:01:32.825 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 00:01:32.825466 | orchestrator | 00:01:32.825 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.825481 | orchestrator | 00:01:32.825 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 00:01:32.825538 | orchestrator | 00:01:32.825 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 00:01:32.825578 | orchestrator | 00:01:32.825 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-09-17 00:01:32.825594 | orchestrator | 00:01:32.825 STDOUT terraform:   + region = (known after apply)
2025-09-17 00:01:32.825643 | orchestrator | 00:01:32.825 STDOUT terraform:   + size = 80
2025-09-17 00:01:32.825656 | orchestrator | 00:01:32.825 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 00:01:32.825669 | orchestrator | 00:01:32.825 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 00:01:32.825679 | orchestrator | 00:01:32.825 STDOUT terraform:   }
2025-09-17 00:01:32.825731 | orchestrator | 00:01:32.825 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-17 00:01:32.825771 | orchestrator | 00:01:32.825 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 00:01:32.825786 | orchestrator | 00:01:32.825 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 00:01:32.825808 | orchestrator | 00:01:32.825 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 00:01:32.825862 | orchestrator | 00:01:32.825 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.825878 | orchestrator | 00:01:32.825 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 00:01:32.825931 | orchestrator | 00:01:32.825 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 00:01:32.825972 | orchestrator | 00:01:32.825 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-09-17 00:01:32.825988 | orchestrator | 00:01:32.825 STDOUT terraform:   + region = (known after apply)
2025-09-17 00:01:32.826001 | orchestrator | 00:01:32.825 STDOUT terraform:   + size = 80
2025-09-17 00:01:32.826146 | orchestrator | 00:01:32.825 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 00:01:32.826233 | orchestrator | 00:01:32.826 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 00:01:32.826448 | orchestrator | 00:01:32.826 STDOUT terraform:   }
2025-09-17 00:01:32.827015 | orchestrator | 00:01:32.826 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-17 00:01:32.827383 | orchestrator | 00:01:32.826 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 00:01:32.827531 | orchestrator | 00:01:32.827 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 00:01:32.827552 | orchestrator | 00:01:32.827 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 00:01:32.827563 | orchestrator | 00:01:32.827 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.827622 | orchestrator | 00:01:32.827 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 00:01:32.827633 | orchestrator | 00:01:32.827 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 00:01:32.827674 | orchestrator | 00:01:32.827 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-09-17 00:01:32.827701 | orchestrator | 00:01:32.827 STDOUT terraform:   + region = (known after apply)
2025-09-17 00:01:32.827712 | orchestrator | 00:01:32.827 STDOUT terraform:   + size = 80
2025-09-17 00:01:32.827748 | orchestrator | 00:01:32.827 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 00:01:32.827763 | orchestrator | 00:01:32.827 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 00:01:32.827773 | orchestrator | 00:01:32.827 STDOUT terraform:   }
2025-09-17 00:01:32.827834 | orchestrator | 00:01:32.827 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-17 00:01:32.827873 | orchestrator | 00:01:32.827 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 00:01:32.827928 | orchestrator | 00:01:32.827 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 00:01:32.827940 | orchestrator | 00:01:32.827 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 00:01:32.827951 | orchestrator | 00:01:32.827 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.828020 | orchestrator | 00:01:32.827 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 00:01:32.828031 | orchestrator | 00:01:32.827 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 00:01:32.828089 | orchestrator | 00:01:32.828 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-09-17 00:01:32.828103 | orchestrator | 00:01:32.828 STDOUT terraform:   + region = (known after apply)
2025-09-17 00:01:32.828113 | orchestrator | 00:01:32.828 STDOUT terraform:   + size = 80
2025-09-17 00:01:32.828152 | orchestrator | 00:01:32.828 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 00:01:32.828167 | orchestrator | 00:01:32.828 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 00:01:32.828179 | orchestrator | 00:01:32.828 STDOUT terraform:   }
2025-09-17 00:01:32.828243 | orchestrator | 00:01:32.828 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-17 00:01:32.828291 | orchestrator | 00:01:32.828 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 00:01:32.828319 | orchestrator | 00:01:32.828 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 00:01:32.828330 | orchestrator | 00:01:32.828 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 00:01:32.828376 | orchestrator | 00:01:32.828 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.828407 | orchestrator | 00:01:32.828 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 00:01:32.828434 | orchestrator | 00:01:32.828 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 00:01:32.828488 | orchestrator | 00:01:32.828 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-09-17 00:01:32.828537 | orchestrator | 00:01:32.828 STDOUT terraform:   + region = (known after apply)
2025-09-17 00:01:32.828550 | orchestrator | 00:01:32.828 STDOUT terraform:   + size = 80
2025-09-17 00:01:32.828561 | orchestrator | 00:01:32.828 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 00:01:32.828599 | orchestrator | 00:01:32.828 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 00:01:32.828613 | orchestrator | 00:01:32.828 STDOUT terraform:   }
2025-09-17 00:01:32.828682 | orchestrator | 00:01:32.828 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-17 00:01:32.828696 | orchestrator | 00:01:32.828 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 00:01:32.828731 | orchestrator | 00:01:32.828 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 00:01:32.828758 | orchestrator | 00:01:32.828 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 00:01:32.828802 | orchestrator | 00:01:32.828 STDOUT terraform:   + id = (known after apply)
2025-09-17 00:01:32.828816 | orchestrator | 00:01:32.828 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 00:01:32.828881 | orchestrator | 00:01:32.828 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 00:01:32.828895 | orchestrator | 00:01:32.828 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-09-17 00:01:32.828934 | orchestrator | 00:01:32.828 STDOUT terraform:   + region = (known after apply)
2025-09-17 00:01:32.828962 | orchestrator | 00:01:32.828 STDOUT terraform:   + size = 80
2025-09-17 00:01:32.828989 | orchestrator | 00:01:32.828 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 00:01:32.829001 | orchestrator | 00:01:32.828 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 00:01:32.829009 | orchestrator | 00:01:32.828 STDOUT terraform:   }
2025-09-17 00:01:32.829046 | orchestrator | 00:01:32.828 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-17 00:01:32.829086 | orchestrator | 00:01:32.829 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-17 00:01:32.829118 | orchestrator | 00:01:32.829 STDOUT
terraform:  + attachment = (known after apply) 2025-09-17 00:01:32.829130 | orchestrator | 00:01:32.829 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.829176 | orchestrator | 00:01:32.829 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.829246 | orchestrator | 00:01:32.829 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 00:01:32.829259 | orchestrator | 00:01:32.829 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-17 00:01:32.829302 | orchestrator | 00:01:32.829 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.829316 | orchestrator | 00:01:32.829 STDOUT terraform:  + size = 20 2025-09-17 00:01:32.829360 | orchestrator | 00:01:32.829 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 00:01:32.829370 | orchestrator | 00:01:32.829 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 00:01:32.829381 | orchestrator | 00:01:32.829 STDOUT terraform:  } 2025-09-17 00:01:32.829418 | orchestrator | 00:01:32.829 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-17 00:01:32.829478 | orchestrator | 00:01:32.829 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 00:01:32.829492 | orchestrator | 00:01:32.829 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 00:01:32.829529 | orchestrator | 00:01:32.829 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.829564 | orchestrator | 00:01:32.829 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.829591 | orchestrator | 00:01:32.829 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 00:01:32.829640 | orchestrator | 00:01:32.829 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-17 00:01:32.829652 | orchestrator | 00:01:32.829 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.829712 | orchestrator | 00:01:32.829 STDOUT terraform:  + size = 20 2025-09-17 00:01:32.829722 | 
orchestrator | 00:01:32.829 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 00:01:32.829732 | orchestrator | 00:01:32.829 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 00:01:32.829739 | orchestrator | 00:01:32.829 STDOUT terraform:  } 2025-09-17 00:01:32.829799 | orchestrator | 00:01:32.829 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-17 00:01:32.829811 | orchestrator | 00:01:32.829 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 00:01:32.829865 | orchestrator | 00:01:32.829 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 00:01:32.829883 | orchestrator | 00:01:32.829 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.829920 | orchestrator | 00:01:32.829 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.829951 | orchestrator | 00:01:32.829 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 00:01:32.829998 | orchestrator | 00:01:32.829 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-17 00:01:32.830010 | orchestrator | 00:01:32.829 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.830116 | orchestrator | 00:01:32.830 STDOUT terraform:  + size = 20 2025-09-17 00:01:32.830383 | orchestrator | 00:01:32.830 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 00:01:32.830517 | orchestrator | 00:01:32.830 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 00:01:32.830635 | orchestrator | 00:01:32.830 STDOUT terraform:  } 2025-09-17 00:01:32.831098 | orchestrator | 00:01:32.830 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-17 00:01:32.831652 | orchestrator | 00:01:32.831 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 00:01:32.832007 | orchestrator | 00:01:32.831 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 00:01:32.832278 | orchestrator | 
00:01:32.832 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.832490 | orchestrator | 00:01:32.832 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.832783 | orchestrator | 00:01:32.832 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 00:01:32.833043 | orchestrator | 00:01:32.832 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-17 00:01:32.833239 | orchestrator | 00:01:32.833 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.833401 | orchestrator | 00:01:32.833 STDOUT terraform:  + size = 20 2025-09-17 00:01:32.833629 | orchestrator | 00:01:32.833 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 00:01:32.833643 | orchestrator | 00:01:32.833 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 00:01:32.833653 | orchestrator | 00:01:32.833 STDOUT terraform:  } 2025-09-17 00:01:32.833707 | orchestrator | 00:01:32.833 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-17 00:01:32.833752 | orchestrator | 00:01:32.833 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 00:01:32.833787 | orchestrator | 00:01:32.833 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 00:01:32.833800 | orchestrator | 00:01:32.833 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.833847 | orchestrator | 00:01:32.833 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.833883 | orchestrator | 00:01:32.833 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 00:01:32.833921 | orchestrator | 00:01:32.833 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-17 00:01:32.833957 | orchestrator | 00:01:32.833 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.833981 | orchestrator | 00:01:32.833 STDOUT terraform:  + size = 20 2025-09-17 00:01:32.833991 | orchestrator | 00:01:32.833 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 
00:01:32.834052 | orchestrator | 00:01:32.833 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 00:01:32.834065 | orchestrator | 00:01:32.834 STDOUT terraform:  } 2025-09-17 00:01:32.834107 | orchestrator | 00:01:32.834 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-17 00:01:32.834152 | orchestrator | 00:01:32.834 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 00:01:32.834232 | orchestrator | 00:01:32.834 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 00:01:32.834242 | orchestrator | 00:01:32.834 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.834267 | orchestrator | 00:01:32.834 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.834306 | orchestrator | 00:01:32.834 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 00:01:32.834345 | orchestrator | 00:01:32.834 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-17 00:01:32.834382 | orchestrator | 00:01:32.834 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.834402 | orchestrator | 00:01:32.834 STDOUT terraform:  + size = 20 2025-09-17 00:01:32.834412 | orchestrator | 00:01:32.834 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 00:01:32.834445 | orchestrator | 00:01:32.834 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 00:01:32.834456 | orchestrator | 00:01:32.834 STDOUT terraform:  } 2025-09-17 00:01:32.834499 | orchestrator | 00:01:32.834 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-17 00:01:32.834543 | orchestrator | 00:01:32.834 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 00:01:32.834577 | orchestrator | 00:01:32.834 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 00:01:32.834603 | orchestrator | 00:01:32.834 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.834639 | 
orchestrator | 00:01:32.834 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.834680 | orchestrator | 00:01:32.834 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 00:01:32.834719 | orchestrator | 00:01:32.834 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-17 00:01:32.834755 | orchestrator | 00:01:32.834 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.834766 | orchestrator | 00:01:32.834 STDOUT terraform:  + size = 20 2025-09-17 00:01:32.834797 | orchestrator | 00:01:32.834 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 00:01:32.834808 | orchestrator | 00:01:32.834 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 00:01:32.834818 | orchestrator | 00:01:32.834 STDOUT terraform:  } 2025-09-17 00:01:32.834872 | orchestrator | 00:01:32.834 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-17 00:01:32.834915 | orchestrator | 00:01:32.834 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 00:01:32.834952 | orchestrator | 00:01:32.834 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 00:01:32.834965 | orchestrator | 00:01:32.834 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.835010 | orchestrator | 00:01:32.834 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.835047 | orchestrator | 00:01:32.835 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 00:01:32.835086 | orchestrator | 00:01:32.835 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-17 00:01:32.835123 | orchestrator | 00:01:32.835 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.835135 | orchestrator | 00:01:32.835 STDOUT terraform:  + size = 20 2025-09-17 00:01:32.835164 | orchestrator | 00:01:32.835 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 00:01:32.835206 | orchestrator | 00:01:32.835 STDOUT terraform:  + volume_type = "ssd" 
2025-09-17 00:01:32.835215 | orchestrator | 00:01:32.835 STDOUT terraform:  } 2025-09-17 00:01:32.835255 | orchestrator | 00:01:32.835 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-17 00:01:32.835299 | orchestrator | 00:01:32.835 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 00:01:32.835335 | orchestrator | 00:01:32.835 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 00:01:32.835347 | orchestrator | 00:01:32.835 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.835393 | orchestrator | 00:01:32.835 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.835430 | orchestrator | 00:01:32.835 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 00:01:32.835470 | orchestrator | 00:01:32.835 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-17 00:01:32.835506 | orchestrator | 00:01:32.835 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.835517 | orchestrator | 00:01:32.835 STDOUT terraform:  + size = 20 2025-09-17 00:01:32.835547 | orchestrator | 00:01:32.835 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 00:01:32.835576 | orchestrator | 00:01:32.835 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 00:01:32.835586 | orchestrator | 00:01:32.835 STDOUT terraform:  } 2025-09-17 00:01:32.835634 | orchestrator | 00:01:32.835 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-17 00:01:32.835672 | orchestrator | 00:01:32.835 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-17 00:01:32.835708 | orchestrator | 00:01:32.835 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-17 00:01:32.835742 | orchestrator | 00:01:32.835 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-17 00:01:32.835779 | orchestrator | 00:01:32.835 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-17 00:01:32.835814 | orchestrator | 00:01:32.835 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 00:01:32.835827 | orchestrator | 00:01:32.835 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.835844 | orchestrator | 00:01:32.835 STDOUT terraform:  + config_drive = true 2025-09-17 00:01:32.835891 | orchestrator | 00:01:32.835 STDOUT terraform:  + created = (known after apply) 2025-09-17 00:01:32.835923 | orchestrator | 00:01:32.835 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-17 00:01:32.835955 | orchestrator | 00:01:32.835 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-17 00:01:32.835968 | orchestrator | 00:01:32.835 STDOUT terraform:  + force_delete = false 2025-09-17 00:01:32.836010 | orchestrator | 00:01:32.835 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-17 00:01:32.836046 | orchestrator | 00:01:32.836 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.836082 | orchestrator | 00:01:32.836 STDOUT terraform:  + image_id = (known after apply) 2025-09-17 00:01:32.836116 | orchestrator | 00:01:32.836 STDOUT terraform:  + image_name = (known after apply) 2025-09-17 00:01:32.836145 | orchestrator | 00:01:32.836 STDOUT terraform:  + key_pair = "testbed" 2025-09-17 00:01:32.836178 | orchestrator | 00:01:32.836 STDOUT terraform:  + name = "testbed-manager" 2025-09-17 00:01:32.836205 | orchestrator | 00:01:32.836 STDOUT terraform:  + power_state = "active" 2025-09-17 00:01:32.836247 | orchestrator | 00:01:32.836 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.836282 | orchestrator | 00:01:32.836 STDOUT terraform:  + security_groups = (known after apply) 2025-09-17 00:01:32.836293 | orchestrator | 00:01:32.836 STDOUT terraform:  + stop_before_destroy = false 2025-09-17 00:01:32.836340 | orchestrator | 00:01:32.836 STDOUT terraform:  + updated = (known after apply) 2025-09-17 00:01:32.836371 | orchestrator | 00:01:32.836 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-17 00:01:32.836381 | orchestrator | 00:01:32.836 STDOUT terraform:  + block_device { 2025-09-17 00:01:32.836409 | orchestrator | 00:01:32.836 STDOUT terraform:  + boot_index = 0 2025-09-17 00:01:32.836436 | orchestrator | 00:01:32.836 STDOUT terraform:  + delete_on_termination = false 2025-09-17 00:01:32.836466 | orchestrator | 00:01:32.836 STDOUT terraform:  + destination_type = "volume" 2025-09-17 00:01:32.836495 | orchestrator | 00:01:32.836 STDOUT terraform:  + multiattach = false 2025-09-17 00:01:32.836527 | orchestrator | 00:01:32.836 STDOUT terraform:  + source_type = "volume" 2025-09-17 00:01:32.836565 | orchestrator | 00:01:32.836 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 00:01:32.836576 | orchestrator | 00:01:32.836 STDOUT terraform:  } 2025-09-17 00:01:32.836585 | orchestrator | 00:01:32.836 STDOUT terraform:  + network { 2025-09-17 00:01:32.836609 | orchestrator | 00:01:32.836 STDOUT terraform:  + access_network = false 2025-09-17 00:01:32.836642 | orchestrator | 00:01:32.836 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-17 00:01:32.836674 | orchestrator | 00:01:32.836 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-17 00:01:32.836706 | orchestrator | 00:01:32.836 STDOUT terraform:  + mac = (known after apply) 2025-09-17 00:01:32.836739 | orchestrator | 00:01:32.836 STDOUT terraform:  + name = (known after apply) 2025-09-17 00:01:32.836769 | orchestrator | 00:01:32.836 STDOUT terraform:  + port = (known after apply) 2025-09-17 00:01:32.836801 | orchestrator | 00:01:32.836 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 00:01:32.836813 | orchestrator | 00:01:32.836 STDOUT terraform:  } 2025-09-17 00:01:32.836823 | orchestrator | 00:01:32.836 STDOUT terraform:  } 2025-09-17 00:01:32.836867 | orchestrator | 00:01:32.836 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-17 00:01:32.836910 | orchestrator | 00:01:32.836 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-17 00:01:32.836948 | orchestrator | 00:01:32.836 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-17 00:01:32.836987 | orchestrator | 00:01:32.836 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-17 00:01:32.837017 | orchestrator | 00:01:32.836 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-17 00:01:32.837054 | orchestrator | 00:01:32.837 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 00:01:32.837080 | orchestrator | 00:01:32.837 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.837090 | orchestrator | 00:01:32.837 STDOUT terraform:  + config_drive = true 2025-09-17 00:01:32.837133 | orchestrator | 00:01:32.837 STDOUT terraform:  + created = (known after apply) 2025-09-17 00:01:32.837169 | orchestrator | 00:01:32.837 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-17 00:01:32.837240 | orchestrator | 00:01:32.837 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-17 00:01:32.837249 | orchestrator | 00:01:32.837 STDOUT terraform:  + force_delete = false 2025-09-17 00:01:32.837261 | orchestrator | 00:01:32.837 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-17 00:01:32.837305 | orchestrator | 00:01:32.837 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.837340 | orchestrator | 00:01:32.837 STDOUT terraform:  + image_id = (known after apply) 2025-09-17 00:01:32.837374 | orchestrator | 00:01:32.837 STDOUT terraform:  + image_name = (known after apply) 2025-09-17 00:01:32.837388 | orchestrator | 00:01:32.837 STDOUT terraform:  + key_pair = "testbed" 2025-09-17 00:01:32.837428 | orchestrator | 00:01:32.837 STDOUT terraform:  + name = "testbed-node-0" 2025-09-17 00:01:32.837454 | orchestrator | 00:01:32.837 STDOUT terraform:  + power_state = "active" 2025-09-17 00:01:32.837491 | orchestrator | 00:01:32.837 STDOUT terraform:  + region = (known after 
apply) 2025-09-17 00:01:32.837526 | orchestrator | 00:01:32.837 STDOUT terraform:  + security_groups = (known after apply) 2025-09-17 00:01:32.837537 | orchestrator | 00:01:32.837 STDOUT terraform:  + stop_before_destroy = false 2025-09-17 00:01:32.837580 | orchestrator | 00:01:32.837 STDOUT terraform:  + updated = (known after apply) 2025-09-17 00:01:32.837632 | orchestrator | 00:01:32.837 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-17 00:01:32.837644 | orchestrator | 00:01:32.837 STDOUT terraform:  + block_device { 2025-09-17 00:01:32.837657 | orchestrator | 00:01:32.837 STDOUT terraform:  + boot_index = 0 2025-09-17 00:01:32.837690 | orchestrator | 00:01:32.837 STDOUT terraform:  + delete_on_termination = false 2025-09-17 00:01:32.837721 | orchestrator | 00:01:32.837 STDOUT terraform:  + destination_type = "volume" 2025-09-17 00:01:32.837749 | orchestrator | 00:01:32.837 STDOUT terraform:  + multiattach = false 2025-09-17 00:01:32.837779 | orchestrator | 00:01:32.837 STDOUT terraform:  + source_type = "volume" 2025-09-17 00:01:32.837820 | orchestrator | 00:01:32.837 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 00:01:32.837829 | orchestrator | 00:01:32.837 STDOUT terraform:  } 2025-09-17 00:01:32.837837 | orchestrator | 00:01:32.837 STDOUT terraform:  + network { 2025-09-17 00:01:32.837862 | orchestrator | 00:01:32.837 STDOUT terraform:  + access_network = false 2025-09-17 00:01:32.837895 | orchestrator | 00:01:32.837 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-17 00:01:32.837925 | orchestrator | 00:01:32.837 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-17 00:01:32.837957 | orchestrator | 00:01:32.837 STDOUT terraform:  + mac = (known after apply) 2025-09-17 00:01:32.837989 | orchestrator | 00:01:32.837 STDOUT terraform:  + name = (known after apply) 2025-09-17 00:01:32.838052 | orchestrator | 00:01:32.837 STDOUT terraform:  + port = (known after apply) 2025-09-17 
00:01:32.840270 | orchestrator | 00:01:32.838 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 00:01:32.840299 | orchestrator | 00:01:32.838 STDOUT terraform:  } 2025-09-17 00:01:32.840306 | orchestrator | 00:01:32.838 STDOUT terraform:  } 2025-09-17 00:01:32.840312 | orchestrator | 00:01:32.838 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-17 00:01:32.840318 | orchestrator | 00:01:32.838 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-17 00:01:32.840323 | orchestrator | 00:01:32.838 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-17 00:01:32.840328 | orchestrator | 00:01:32.838 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-17 00:01:32.840333 | orchestrator | 00:01:32.838 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-17 00:01:32.840338 | orchestrator | 00:01:32.838 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 00:01:32.840343 | orchestrator | 00:01:32.838 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.840347 | orchestrator | 00:01:32.838 STDOUT terraform:  + config_drive = true 2025-09-17 00:01:32.840352 | orchestrator | 00:01:32.838 STDOUT terraform:  + created = (known after apply) 2025-09-17 00:01:32.840357 | orchestrator | 00:01:32.838 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-17 00:01:32.840362 | orchestrator | 00:01:32.839 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-17 00:01:32.840367 | orchestrator | 00:01:32.839 STDOUT terraform:  + force_delete = false 2025-09-17 00:01:32.840377 | orchestrator | 00:01:32.839 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-17 00:01:32.840391 | orchestrator | 00:01:32.839 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.840396 | orchestrator | 00:01:32.839 STDOUT terraform:  + image_id = (known after apply) 2025-09-17 00:01:32.840422 | orchestrator | 00:01:32.839 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-17 00:01:32.840428 | orchestrator | 00:01:32.839 STDOUT terraform:  + key_pair = "testbed" 2025-09-17 00:01:32.840432 | orchestrator | 00:01:32.839 STDOUT terraform:  + name = "testbed-node-1" 2025-09-17 00:01:32.840437 | orchestrator | 00:01:32.839 STDOUT terraform:  + power_state = "active" 2025-09-17 00:01:32.840442 | orchestrator | 00:01:32.839 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.840446 | orchestrator | 00:01:32.839 STDOUT terraform:  + security_groups = (known after apply) 2025-09-17 00:01:32.840451 | orchestrator | 00:01:32.839 STDOUT terraform:  + stop_before_destroy = false 2025-09-17 00:01:32.840456 | orchestrator | 00:01:32.839 STDOUT terraform:  + updated = (known after apply) 2025-09-17 00:01:32.840461 | orchestrator | 00:01:32.839 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-17 00:01:32.840466 | orchestrator | 00:01:32.839 STDOUT terraform:  + block_device { 2025-09-17 00:01:32.840471 | orchestrator | 00:01:32.839 STDOUT terraform:  + boot_index = 0 2025-09-17 00:01:32.840479 | orchestrator | 00:01:32.839 STDOUT terraform:  + delete_on_termination = false 2025-09-17 00:01:32.840483 | orchestrator | 00:01:32.839 STDOUT terraform:  + destination_type = "volume" 2025-09-17 00:01:32.840488 | orchestrator | 00:01:32.839 STDOUT terraform:  + multiattach = false 2025-09-17 00:01:32.840493 | orchestrator | 00:01:32.839 STDOUT terraform:  + source_type = "volume" 2025-09-17 00:01:32.840498 | orchestrator | 00:01:32.839 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 00:01:32.840502 | orchestrator | 00:01:32.839 STDOUT terraform:  } 2025-09-17 00:01:32.840507 | orchestrator | 00:01:32.839 STDOUT terraform:  + network { 2025-09-17 00:01:32.840512 | orchestrator | 00:01:32.839 STDOUT terraform:  + access_network = false 2025-09-17 00:01:32.840517 | orchestrator | 00:01:32.839 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-17 00:01:32.840533 | orchestrator | 00:01:32.839 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-17 00:01:32.840538 | orchestrator | 00:01:32.839 STDOUT terraform:  + mac = (known after apply) 2025-09-17 00:01:32.840543 | orchestrator | 00:01:32.839 STDOUT terraform:  + name = (known after apply) 2025-09-17 00:01:32.840547 | orchestrator | 00:01:32.839 STDOUT terraform:  + port = (known after apply) 2025-09-17 00:01:32.840552 | orchestrator | 00:01:32.839 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 00:01:32.840557 | orchestrator | 00:01:32.839 STDOUT terraform:  } 2025-09-17 00:01:32.840562 | orchestrator | 00:01:32.839 STDOUT terraform:  } 2025-09-17 00:01:32.840567 | orchestrator | 00:01:32.839 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-17 00:01:32.840571 | orchestrator | 00:01:32.839 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-17 00:01:32.840580 | orchestrator | 00:01:32.839 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-17 00:01:32.840585 | orchestrator | 00:01:32.839 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-17 00:01:32.840590 | orchestrator | 00:01:32.839 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-17 00:01:32.840595 | orchestrator | 00:01:32.839 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 00:01:32.840600 | orchestrator | 00:01:32.839 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 00:01:32.840605 | orchestrator | 00:01:32.839 STDOUT terraform:  + config_drive = true 2025-09-17 00:01:32.840609 | orchestrator | 00:01:32.840 STDOUT terraform:  + created = (known after apply) 2025-09-17 00:01:32.840614 | orchestrator | 00:01:32.840 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-17 00:01:32.840619 | orchestrator | 00:01:32.840 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-17 00:01:32.840624 | orchestrator | 00:01:32.840 
00:01:32.840 | orchestrator | STDOUT terraform plan (continued, openstack_compute_instance_v2.node_server[2]):

      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device { (same attributes as node_server[2]) }
      + network { (same attributes as node_server[2]) }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  # openstack_compute_instance_v2.node_server[5] will be created
  #   (plans identical to node_server[3], except name = "testbed-node-4" and "testbed-node-5")

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through [8] will be created
  #   (plans identical to [0]: all attributes known after apply)

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      (same attribute set as manager_port_management, all values known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      (same attribute set as node_port_management[0], all values known after apply)

      + allowed_address_pairs {
| orchestrator | 00:01:32.851 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-17 00:01:32.853136 | orchestrator | 00:01:32.851 STDOUT terraform:  } 2025-09-17 00:01:32.853140 | orchestrator | 00:01:32.851 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.853143 | orchestrator | 00:01:32.851 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-17 00:01:32.853147 | orchestrator | 00:01:32.851 STDOUT terraform:  } 2025-09-17 00:01:32.853151 | orchestrator | 00:01:32.851 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.853155 | orchestrator | 00:01:32.851 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-17 00:01:32.853159 | orchestrator | 00:01:32.851 STDOUT terraform:  } 2025-09-17 00:01:32.853163 | orchestrator | 00:01:32.851 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.853166 | orchestrator | 00:01:32.851 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-17 00:01:32.853170 | orchestrator | 00:01:32.851 STDOUT terraform:  } 2025-09-17 00:01:32.853174 | orchestrator | 00:01:32.851 STDOUT terraform:  + binding (known after apply) 2025-09-17 00:01:32.853202 | orchestrator | 00:01:32.851 STDOUT terraform:  + fixed_ip { 2025-09-17 00:01:32.853206 | orchestrator | 00:01:32.851 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-09-17 00:01:32.853210 | orchestrator | 00:01:32.851 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-17 00:01:32.853213 | orchestrator | 00:01:32.851 STDOUT terraform:  } 2025-09-17 00:01:32.853217 | orchestrator | 00:01:32.851 STDOUT terraform:  } 2025-09-17 00:01:32.853228 | orchestrator | 00:01:32.851 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-09-17 00:01:32.853232 | orchestrator | 00:01:32.852 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-17 00:01:32.853236 | orchestrator | 00:01:32.852 STDOUT terraform:  + admin_state_up = (known after 
apply) 2025-09-17 00:01:32.853239 | orchestrator | 00:01:32.852 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-17 00:01:32.853243 | orchestrator | 00:01:32.852 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-17 00:01:32.853247 | orchestrator | 00:01:32.852 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 00:01:32.853251 | orchestrator | 00:01:32.852 STDOUT terraform:  + device_id = (known after apply) 2025-09-17 00:01:32.853255 | orchestrator | 00:01:32.852 STDOUT terraform:  + device_owner = (known after apply) 2025-09-17 00:01:32.853259 | orchestrator | 00:01:32.852 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-17 00:01:32.853263 | orchestrator | 00:01:32.852 STDOUT terraform:  + dns_name = (known after apply) 2025-09-17 00:01:32.853267 | orchestrator | 00:01:32.852 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.853270 | orchestrator | 00:01:32.852 STDOUT terraform:  + mac_address = (known after apply) 2025-09-17 00:01:32.853274 | orchestrator | 00:01:32.852 STDOUT terraform:  + network_id = (known after apply) 2025-09-17 00:01:32.853278 | orchestrator | 00:01:32.852 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-17 00:01:32.853282 | orchestrator | 00:01:32.852 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-17 00:01:32.853286 | orchestrator | 00:01:32.852 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.853289 | orchestrator | 00:01:32.852 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-17 00:01:32.853293 | orchestrator | 00:01:32.852 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 00:01:32.853297 | orchestrator | 00:01:32.852 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.853301 | orchestrator | 00:01:32.852 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-17 00:01:32.853305 | orchestrator | 00:01:32.852 STDOUT terraform:  } 2025-09-17 
00:01:32.853308 | orchestrator | 00:01:32.852 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.853312 | orchestrator | 00:01:32.852 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-17 00:01:32.853316 | orchestrator | 00:01:32.852 STDOUT terraform:  } 2025-09-17 00:01:32.853320 | orchestrator | 00:01:32.852 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.853328 | orchestrator | 00:01:32.852 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-17 00:01:32.853332 | orchestrator | 00:01:32.852 STDOUT terraform:  } 2025-09-17 00:01:32.853336 | orchestrator | 00:01:32.852 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.853339 | orchestrator | 00:01:32.852 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-17 00:01:32.853343 | orchestrator | 00:01:32.852 STDOUT terraform:  } 2025-09-17 00:01:32.853347 | orchestrator | 00:01:32.852 STDOUT terraform:  + binding (known after apply) 2025-09-17 00:01:32.853351 | orchestrator | 00:01:32.852 STDOUT terraform:  + fixed_ip { 2025-09-17 00:01:32.853355 | orchestrator | 00:01:32.852 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-17 00:01:32.853358 | orchestrator | 00:01:32.852 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-17 00:01:32.853362 | orchestrator | 00:01:32.852 STDOUT terraform:  } 2025-09-17 00:01:32.853366 | orchestrator | 00:01:32.852 STDOUT terraform:  } 2025-09-17 00:01:32.853370 | orchestrator | 00:01:32.852 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-17 00:01:32.853374 | orchestrator | 00:01:32.852 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-17 00:01:32.853382 | orchestrator | 00:01:32.852 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-17 00:01:32.853385 | orchestrator | 00:01:32.852 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-17 00:01:32.853389 | orchestrator | 
00:01:32.852 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-17 00:01:32.853393 | orchestrator | 00:01:32.853 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 00:01:32.853397 | orchestrator | 00:01:32.853 STDOUT terraform:  + device_id = (known after apply) 2025-09-17 00:01:32.853401 | orchestrator | 00:01:32.853 STDOUT terraform:  + device_owner = (known after apply) 2025-09-17 00:01:32.853404 | orchestrator | 00:01:32.853 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-17 00:01:32.853412 | orchestrator | 00:01:32.853 STDOUT terraform:  + dns_name = (known after apply) 2025-09-17 00:01:32.853419 | orchestrator | 00:01:32.853 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.853423 | orchestrator | 00:01:32.853 STDOUT terraform:  + mac_address = (known after apply) 2025-09-17 00:01:32.853426 | orchestrator | 00:01:32.853 STDOUT terraform:  + network_id = (known after apply) 2025-09-17 00:01:32.853430 | orchestrator | 00:01:32.853 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-17 00:01:32.853434 | orchestrator | 00:01:32.853 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-17 00:01:32.853438 | orchestrator | 00:01:32.853 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.853444 | orchestrator | 00:01:32.853 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-17 00:01:32.853448 | orchestrator | 00:01:32.853 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 00:01:32.853453 | orchestrator | 00:01:32.853 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.853547 | orchestrator | 00:01:32.853 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-17 00:01:32.853552 | orchestrator | 00:01:32.853 STDOUT terraform:  } 2025-09-17 00:01:32.853556 | orchestrator | 00:01:32.853 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.853560 | orchestrator | 00:01:32.853 STDOUT terraform: 
 + ip_address = "192.168.16.254/20" 2025-09-17 00:01:32.853564 | orchestrator | 00:01:32.853 STDOUT terraform:  } 2025-09-17 00:01:32.853570 | orchestrator | 00:01:32.853 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.853575 | orchestrator | 00:01:32.853 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-17 00:01:32.853605 | orchestrator | 00:01:32.853 STDOUT terraform:  } 2025-09-17 00:01:32.853612 | orchestrator | 00:01:32.853 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.853795 | orchestrator | 00:01:32.853 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-17 00:01:32.853803 | orchestrator | 00:01:32.853 STDOUT terraform:  } 2025-09-17 00:01:32.853807 | orchestrator | 00:01:32.853 STDOUT terraform:  + binding (known after apply) 2025-09-17 00:01:32.853810 | orchestrator | 00:01:32.853 STDOUT terraform:  + fixed_ip { 2025-09-17 00:01:32.853814 | orchestrator | 00:01:32.853 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-17 00:01:32.853819 | orchestrator | 00:01:32.853 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-17 00:01:32.853822 | orchestrator | 00:01:32.853 STDOUT terraform:  } 2025-09-17 00:01:32.853826 | orchestrator | 00:01:32.853 STDOUT terraform:  } 2025-09-17 00:01:32.853830 | orchestrator | 00:01:32.853 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-17 00:01:32.853846 | orchestrator | 00:01:32.853 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-17 00:01:32.853852 | orchestrator | 00:01:32.853 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-17 00:01:32.858095 | orchestrator | 00:01:32.853 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-17 00:01:32.858112 | orchestrator | 00:01:32.853 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-17 00:01:32.858116 | orchestrator | 00:01:32.853 STDOUT terraform:  + all_tags = (known 
after apply) 2025-09-17 00:01:32.858121 | orchestrator | 00:01:32.853 STDOUT terraform:  + device_id = (known after apply) 2025-09-17 00:01:32.858125 | orchestrator | 00:01:32.853 STDOUT terraform:  + device_owner = (known after apply) 2025-09-17 00:01:32.858130 | orchestrator | 00:01:32.853 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-17 00:01:32.858134 | orchestrator | 00:01:32.854 STDOUT terraform:  + dns_name = (known after apply) 2025-09-17 00:01:32.858139 | orchestrator | 00:01:32.854 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.858144 | orchestrator | 00:01:32.854 STDOUT terraform:  + mac_address = (known after apply) 2025-09-17 00:01:32.858153 | orchestrator | 00:01:32.854 STDOUT terraform:  + network_id = (known after apply) 2025-09-17 00:01:32.858157 | orchestrator | 00:01:32.854 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-17 00:01:32.858168 | orchestrator | 00:01:32.854 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-17 00:01:32.858171 | orchestrator | 00:01:32.854 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.858175 | orchestrator | 00:01:32.854 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-17 00:01:32.858179 | orchestrator | 00:01:32.854 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 00:01:32.858183 | orchestrator | 00:01:32.854 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.858196 | orchestrator | 00:01:32.854 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-17 00:01:32.858201 | orchestrator | 00:01:32.854 STDOUT terraform:  } 2025-09-17 00:01:32.858204 | orchestrator | 00:01:32.854 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.858208 | orchestrator | 00:01:32.854 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-17 00:01:32.858212 | orchestrator | 00:01:32.854 STDOUT terraform:  } 2025-09-17 00:01:32.858216 | orchestrator | 00:01:32.854 
STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.858220 | orchestrator | 00:01:32.854 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-17 00:01:32.858223 | orchestrator | 00:01:32.854 STDOUT terraform:  } 2025-09-17 00:01:32.858227 | orchestrator | 00:01:32.854 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.858231 | orchestrator | 00:01:32.854 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-17 00:01:32.858235 | orchestrator | 00:01:32.854 STDOUT terraform:  } 2025-09-17 00:01:32.858238 | orchestrator | 00:01:32.854 STDOUT terraform:  + binding (known after apply) 2025-09-17 00:01:32.858242 | orchestrator | 00:01:32.854 STDOUT terraform:  + fixed_ip { 2025-09-17 00:01:32.858246 | orchestrator | 00:01:32.854 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-17 00:01:32.858250 | orchestrator | 00:01:32.854 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-17 00:01:32.858253 | orchestrator | 00:01:32.854 STDOUT terraform:  } 2025-09-17 00:01:32.858257 | orchestrator | 00:01:32.854 STDOUT terraform:  } 2025-09-17 00:01:32.858261 | orchestrator | 00:01:32.854 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-17 00:01:32.858265 | orchestrator | 00:01:32.854 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-17 00:01:32.858269 | orchestrator | 00:01:32.854 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-17 00:01:32.858273 | orchestrator | 00:01:32.854 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-17 00:01:32.858277 | orchestrator | 00:01:32.854 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-17 00:01:32.858286 | orchestrator | 00:01:32.854 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 00:01:32.858290 | orchestrator | 00:01:32.854 STDOUT terraform:  + device_id = (known after apply) 2025-09-17 00:01:32.858293 | orchestrator | 
00:01:32.854 STDOUT terraform:  + device_owner = (known after apply) 2025-09-17 00:01:32.858297 | orchestrator | 00:01:32.854 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-17 00:01:32.858304 | orchestrator | 00:01:32.854 STDOUT terraform:  + dns_name = (known after apply) 2025-09-17 00:01:32.858308 | orchestrator | 00:01:32.854 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.858312 | orchestrator | 00:01:32.854 STDOUT terraform:  + mac_address = (known after apply) 2025-09-17 00:01:32.858316 | orchestrator | 00:01:32.854 STDOUT terraform:  + network_id = (known after apply) 2025-09-17 00:01:32.858319 | orchestrator | 00:01:32.855 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-17 00:01:32.858323 | orchestrator | 00:01:32.855 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-17 00:01:32.858330 | orchestrator | 00:01:32.855 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.858334 | orchestrator | 00:01:32.855 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-17 00:01:32.858338 | orchestrator | 00:01:32.855 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 00:01:32.858341 | orchestrator | 00:01:32.855 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.858345 | orchestrator | 00:01:32.855 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-17 00:01:32.858349 | orchestrator | 00:01:32.855 STDOUT terraform:  } 2025-09-17 00:01:32.858353 | orchestrator | 00:01:32.855 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.858356 | orchestrator | 00:01:32.855 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-17 00:01:32.858360 | orchestrator | 00:01:32.855 STDOUT terraform:  } 2025-09-17 00:01:32.858364 | orchestrator | 00:01:32.855 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.858368 | orchestrator | 00:01:32.855 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-17 
00:01:32.858371 | orchestrator | 00:01:32.855 STDOUT terraform:  } 2025-09-17 00:01:32.858375 | orchestrator | 00:01:32.855 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 00:01:32.858379 | orchestrator | 00:01:32.855 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-17 00:01:32.858383 | orchestrator | 00:01:32.855 STDOUT terraform:  } 2025-09-17 00:01:32.858386 | orchestrator | 00:01:32.855 STDOUT terraform:  + binding (known after apply) 2025-09-17 00:01:32.858390 | orchestrator | 00:01:32.855 STDOUT terraform:  + fixed_ip { 2025-09-17 00:01:32.858394 | orchestrator | 00:01:32.855 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-17 00:01:32.858398 | orchestrator | 00:01:32.855 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-17 00:01:32.858401 | orchestrator | 00:01:32.855 STDOUT terraform:  } 2025-09-17 00:01:32.858405 | orchestrator | 00:01:32.855 STDOUT terraform:  } 2025-09-17 00:01:32.858409 | orchestrator | 00:01:32.855 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-17 00:01:32.858413 | orchestrator | 00:01:32.855 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-17 00:01:32.858417 | orchestrator | 00:01:32.855 STDOUT terraform:  + force_destroy = false 2025-09-17 00:01:32.858420 | orchestrator | 00:01:32.855 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.858436 | orchestrator | 00:01:32.855 STDOUT terraform:  + port_id = (known after apply) 2025-09-17 00:01:32.858440 | orchestrator | 00:01:32.855 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.858443 | orchestrator | 00:01:32.855 STDOUT terraform:  + router_id = (known after apply) 2025-09-17 00:01:32.858447 | orchestrator | 00:01:32.855 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-17 00:01:32.858451 | orchestrator | 00:01:32.855 STDOUT terraform:  } 2025-09-17 00:01:32.858459 | orchestrator | 00:01:32.855 
STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-17 00:01:32.858463 | orchestrator | 00:01:32.855 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-17 00:01:32.858467 | orchestrator | 00:01:32.855 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-17 00:01:32.858470 | orchestrator | 00:01:32.855 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 00:01:32.858475 | orchestrator | 00:01:32.855 STDOUT terraform:  + availability_zone_hints = [ 2025-09-17 00:01:32.858479 | orchestrator | 00:01:32.855 STDOUT terraform:  + "nova", 2025-09-17 00:01:32.858483 | orchestrator | 00:01:32.855 STDOUT terraform:  ] 2025-09-17 00:01:32.858487 | orchestrator | 00:01:32.855 STDOUT terraform:  + distributed = (known after apply) 2025-09-17 00:01:32.858491 | orchestrator | 00:01:32.855 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-17 00:01:32.858495 | orchestrator | 00:01:32.855 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-17 00:01:32.858498 | orchestrator | 00:01:32.855 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-17 00:01:32.858508 | orchestrator | 00:01:32.855 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.858512 | orchestrator | 00:01:32.855 STDOUT terraform:  + name = "testbed" 2025-09-17 00:01:32.858516 | orchestrator | 00:01:32.856 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.858520 | orchestrator | 00:01:32.856 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 00:01:32.858523 | orchestrator | 00:01:32.856 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-17 00:01:32.858527 | orchestrator | 00:01:32.856 STDOUT terraform:  } 2025-09-17 00:01:32.858531 | orchestrator | 00:01:32.856 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-17 00:01:32.858536 | 
orchestrator | 00:01:32.856 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-17 00:01:32.858539 | orchestrator | 00:01:32.856 STDOUT terraform:  + description = "ssh" 2025-09-17 00:01:32.858543 | orchestrator | 00:01:32.856 STDOUT terraform:  + direction = "ingress" 2025-09-17 00:01:32.858547 | orchestrator | 00:01:32.856 STDOUT terraform:  + ethertype = "IPv4" 2025-09-17 00:01:32.858551 | orchestrator | 00:01:32.856 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.858554 | orchestrator | 00:01:32.856 STDOUT terraform:  + port_range_max = 22 2025-09-17 00:01:32.858561 | orchestrator | 00:01:32.856 STDOUT terraform:  + port_range_min = 22 2025-09-17 00:01:32.858565 | orchestrator | 00:01:32.856 STDOUT terraform:  + protocol = "tcp" 2025-09-17 00:01:32.858569 | orchestrator | 00:01:32.856 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.858573 | orchestrator | 00:01:32.856 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-17 00:01:32.858576 | orchestrator | 00:01:32.856 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-17 00:01:32.858580 | orchestrator | 00:01:32.856 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-17 00:01:32.858584 | orchestrator | 00:01:32.856 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-17 00:01:32.858588 | orchestrator | 00:01:32.856 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 00:01:32.858591 | orchestrator | 00:01:32.856 STDOUT terraform:  } 2025-09-17 00:01:32.858595 | orchestrator | 00:01:32.856 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-17 00:01:32.858599 | orchestrator | 00:01:32.856 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-17 00:01:32.858607 | orchestrator | 00:01:32.856 STDOUT terraform:  + 
description = "wireguard" 2025-09-17 00:01:32.858611 | orchestrator | 00:01:32.856 STDOUT terraform:  + direction = "ingress" 2025-09-17 00:01:32.858614 | orchestrator | 00:01:32.856 STDOUT terraform:  + ethertype = "IPv4" 2025-09-17 00:01:32.858618 | orchestrator | 00:01:32.856 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.858622 | orchestrator | 00:01:32.856 STDOUT terraform:  + port_range_max = 51820 2025-09-17 00:01:32.858626 | orchestrator | 00:01:32.856 STDOUT terraform:  + port_range_min = 51820 2025-09-17 00:01:32.858630 | orchestrator | 00:01:32.856 STDOUT terraform:  + protocol = "udp" 2025-09-17 00:01:32.858633 | orchestrator | 00:01:32.856 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.858637 | orchestrator | 00:01:32.856 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-17 00:01:32.858641 | orchestrator | 00:01:32.856 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-17 00:01:32.858645 | orchestrator | 00:01:32.856 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-17 00:01:32.858651 | orchestrator | 00:01:32.856 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-17 00:01:32.858655 | orchestrator | 00:01:32.856 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 00:01:32.858659 | orchestrator | 00:01:32.856 STDOUT terraform:  } 2025-09-17 00:01:32.858663 | orchestrator | 00:01:32.857 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-17 00:01:32.858667 | orchestrator | 00:01:32.857 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-17 00:01:32.858670 | orchestrator | 00:01:32.857 STDOUT terraform:  + direction = "ingress" 2025-09-17 00:01:32.858674 | orchestrator | 00:01:32.857 STDOUT terraform:  + ethertype = "IPv4" 2025-09-17 00:01:32.858681 | orchestrator | 00:01:32.857 STDOUT terraform:  + id = (known 
after apply) 2025-09-17 00:01:32.858685 | orchestrator | 00:01:32.857 STDOUT terraform:  + protocol = "tcp" 2025-09-17 00:01:32.858689 | orchestrator | 00:01:32.857 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.858693 | orchestrator | 00:01:32.857 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-17 00:01:32.858696 | orchestrator | 00:01:32.857 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-17 00:01:32.858700 | orchestrator | 00:01:32.857 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-17 00:01:32.858704 | orchestrator | 00:01:32.857 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-17 00:01:32.858708 | orchestrator | 00:01:32.857 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 00:01:32.858711 | orchestrator | 00:01:32.857 STDOUT terraform:  } 2025-09-17 00:01:32.858715 | orchestrator | 00:01:32.857 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-17 00:01:32.858719 | orchestrator | 00:01:32.857 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-17 00:01:32.858723 | orchestrator | 00:01:32.857 STDOUT terraform:  + direction = "ingress" 2025-09-17 00:01:32.858727 | orchestrator | 00:01:32.857 STDOUT terraform:  + ethertype = "IPv4" 2025-09-17 00:01:32.858730 | orchestrator | 00:01:32.857 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.858734 | orchestrator | 00:01:32.857 STDOUT terraform:  + protocol = "udp" 2025-09-17 00:01:32.858738 | orchestrator | 00:01:32.857 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.858742 | orchestrator | 00:01:32.857 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-17 00:01:32.858745 | orchestrator | 00:01:32.857 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-17 00:01:32.858752 | orchestrator | 
00:01:32.857 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-17 00:01:32.858756 | orchestrator | 00:01:32.857 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-17 00:01:32.858760 | orchestrator | 00:01:32.857 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 00:01:32.858764 | orchestrator | 00:01:32.857 STDOUT terraform:  } 2025-09-17 00:01:32.858767 | orchestrator | 00:01:32.857 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-17 00:01:32.858771 | orchestrator | 00:01:32.857 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-17 00:01:32.858775 | orchestrator | 00:01:32.857 STDOUT terraform:  + direction = "ingress" 2025-09-17 00:01:32.858779 | orchestrator | 00:01:32.857 STDOUT terraform:  + ethertype = "IPv4" 2025-09-17 00:01:32.858782 | orchestrator | 00:01:32.857 STDOUT terraform:  + id = (known after apply) 2025-09-17 00:01:32.858786 | orchestrator | 00:01:32.857 STDOUT terraform:  + protocol = "icmp" 2025-09-17 00:01:32.858796 | orchestrator | 00:01:32.857 STDOUT terraform:  + region = (known after apply) 2025-09-17 00:01:32.858800 | orchestrator | 00:01:32.858 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-17 00:01:32.858804 | orchestrator | 00:01:32.858 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-17 00:01:32.858808 | orchestrator | 00:01:32.858 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-17 00:01:32.860415 | orchestrator | 00:01:32.858 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-17 00:01:32.860441 | orchestrator | 00:01:32.859 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 00:01:32.860446 | orchestrator | 00:01:32.859 STDOUT terraform:  } 2025-09-17 00:01:32.860450 | orchestrator | 00:01:32.859 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2025-09-17 00:01:32.860454 | orchestrator | 00:01:32.859 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-09-17 00:01:32.860458 | orchestrator | 00:01:32.859 STDOUT terraform:  + direction = "ingress"
2025-09-17 00:01:32.860462 | orchestrator | 00:01:32.859 STDOUT terraform:  + ethertype = "IPv4"
2025-09-17 00:01:32.860466 | orchestrator | 00:01:32.859 STDOUT terraform:  + id = (known after apply)
2025-09-17 00:01:32.860470 | orchestrator | 00:01:32.859 STDOUT terraform:  + protocol = "tcp"
2025-09-17 00:01:32.860474 | orchestrator | 00:01:32.859 STDOUT terraform:  + region = (known after apply)
2025-09-17 00:01:32.860478 | orchestrator | 00:01:32.859 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-17 00:01:32.860481 | orchestrator | 00:01:32.859 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-17 00:01:32.860485 | orchestrator | 00:01:32.859 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-17 00:01:32.860489 | orchestrator | 00:01:32.859 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-17 00:01:32.860493 | orchestrator | 00:01:32.859 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 00:01:32.860497 | orchestrator | 00:01:32.859 STDOUT terraform:  }
2025-09-17 00:01:32.860501 | orchestrator | 00:01:32.859 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-09-17 00:01:32.860504 | orchestrator | 00:01:32.859 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-09-17 00:01:32.860508 | orchestrator | 00:01:32.859 STDOUT terraform:  + direction = "ingress"
2025-09-17 00:01:32.860512 | orchestrator | 00:01:32.859 STDOUT terraform:  + ethertype = "IPv4"
2025-09-17 00:01:32.860516 | orchestrator | 00:01:32.859 STDOUT terraform:  + id = (known after apply)
2025-09-17 00:01:32.860520 | orchestrator | 00:01:32.859 STDOUT terraform:  + protocol = "udp"
2025-09-17 00:01:32.860523 | orchestrator | 00:01:32.859 STDOUT terraform:  + region = (known after apply)
2025-09-17 00:01:32.860527 | orchestrator | 00:01:32.859 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-17 00:01:32.860540 | orchestrator | 00:01:32.859 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-17 00:01:32.860544 | orchestrator | 00:01:32.859 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-17 00:01:32.860548 | orchestrator | 00:01:32.859 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-17 00:01:32.860551 | orchestrator | 00:01:32.860 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 00:01:32.860555 | orchestrator | 00:01:32.860 STDOUT terraform:  }
2025-09-17 00:01:32.860559 | orchestrator | 00:01:32.860 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-09-17 00:01:32.860563 | orchestrator | 00:01:32.860 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-09-17 00:01:32.860567 | orchestrator | 00:01:32.860 STDOUT terraform:  + direction = "ingress"
2025-09-17 00:01:32.860570 | orchestrator | 00:01:32.860 STDOUT terraform:  + ethertype = "IPv4"
2025-09-17 00:01:32.860574 | orchestrator | 00:01:32.860 STDOUT terraform:  + id = (known after apply)
2025-09-17 00:01:32.860578 | orchestrator | 00:01:32.860 STDOUT terraform:  + protocol = "icmp"
2025-09-17 00:01:32.860590 | orchestrator | 00:01:32.860 STDOUT terraform:  + region = (known after apply)
2025-09-17 00:01:32.860594 | orchestrator | 00:01:32.860 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-17 00:01:32.860598 | orchestrator | 00:01:32.860 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-17 00:01:32.860601 | orchestrator | 00:01:32.860 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-17 00:01:32.860605 | orchestrator | 00:01:32.860 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-17 00:01:32.860609 | orchestrator | 00:01:32.860 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 00:01:32.860613 | orchestrator | 00:01:32.860 STDOUT terraform:  }
2025-09-17 00:01:32.860616 | orchestrator | 00:01:32.860 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-09-17 00:01:32.860620 | orchestrator | 00:01:32.860 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-09-17 00:01:32.860624 | orchestrator | 00:01:32.860 STDOUT terraform:  + description = "vrrp"
2025-09-17 00:01:32.860630 | orchestrator | 00:01:32.860 STDOUT terraform:  + direction = "ingress"
2025-09-17 00:01:32.860634 | orchestrator | 00:01:32.860 STDOUT terraform:  + ethertype = "IPv4"
2025-09-17 00:01:32.860662 | orchestrator | 00:01:32.860 STDOUT terraform:  + id = (known after apply)
2025-09-17 00:01:32.860711 | orchestrator | 00:01:32.860 STDOUT terraform:  + protocol = "112"
2025-09-17 00:01:32.860718 | orchestrator | 00:01:32.860 STDOUT terraform:  + region = (known after apply)
2025-09-17 00:01:32.860753 | orchestrator | 00:01:32.860 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-17 00:01:32.860789 | orchestrator | 00:01:32.860 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-17 00:01:32.860821 | orchestrator | 00:01:32.860 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-17 00:01:32.860856 | orchestrator | 00:01:32.860 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-17 00:01:32.860894 | orchestrator | 00:01:32.860 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 00:01:32.860902 | orchestrator | 00:01:32.860 STDOUT terraform:  }
2025-09-17 00:01:32.860953 | orchestrator | 00:01:32.860 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-17 00:01:32.861005 | orchestrator | 00:01:32.860 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-17 00:01:32.861035 | orchestrator | 00:01:32.860 STDOUT terraform:  + all_tags = (known after apply)
2025-09-17 00:01:32.861069 | orchestrator | 00:01:32.861 STDOUT terraform:  + description = "management security group"
2025-09-17 00:01:32.861099 | orchestrator | 00:01:32.861 STDOUT terraform:  + id = (known after apply)
2025-09-17 00:01:32.861127 | orchestrator | 00:01:32.861 STDOUT terraform:  + name = "testbed-management"
2025-09-17 00:01:32.861155 | orchestrator | 00:01:32.861 STDOUT terraform:  + region = (known after apply)
2025-09-17 00:01:32.861225 | orchestrator | 00:01:32.861 STDOUT terraform:  + stateful = (known after apply)
2025-09-17 00:01:32.861234 | orchestrator | 00:01:32.861 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 00:01:32.861238 | orchestrator | 00:01:32.861 STDOUT terraform:  }
2025-09-17 00:01:32.861273 | orchestrator | 00:01:32.861 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-17 00:01:32.861325 | orchestrator | 00:01:32.861 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-17 00:01:32.861355 | orchestrator | 00:01:32.861 STDOUT terraform:  + all_tags = (known after apply)
2025-09-17 00:01:32.861382 | orchestrator | 00:01:32.861 STDOUT terraform:  + description = "node security group"
2025-09-17 00:01:32.861412 | orchestrator | 00:01:32.861 STDOUT terraform:  + id = (known after apply)
2025-09-17 00:01:32.861437 | orchestrator | 00:01:32.861 STDOUT terraform:  + name = "testbed-node"
2025-09-17 00:01:32.861467 | orchestrator | 00:01:32.861 STDOUT terraform:  + region = (known after apply)
2025-09-17 00:01:32.861495 | orchestrator | 00:01:32.861 STDOUT terraform:  + stateful = (known after apply)
2025-09-17 00:01:32.861524 | orchestrator | 00:01:32.861 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 00:01:32.861531 | orchestrator | 00:01:32.861 STDOUT terraform:  }
2025-09-17 00:01:32.861578 | orchestrator | 00:01:32.861 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-17 00:01:32.861622 | orchestrator | 00:01:32.861 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-17 00:01:32.861653 | orchestrator | 00:01:32.861 STDOUT terraform:  + all_tags = (known after apply)
2025-09-17 00:01:32.861683 | orchestrator | 00:01:32.861 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-17 00:01:32.861706 | orchestrator | 00:01:32.861 STDOUT terraform:  + dns_nameservers = [
2025-09-17 00:01:32.861714 | orchestrator | 00:01:32.861 STDOUT terraform:  + "8.8.8.8",
2025-09-17 00:01:32.861725 | orchestrator | 00:01:32.861 STDOUT terraform:  + "9.9.9.9",
2025-09-17 00:01:32.861745 | orchestrator | 00:01:32.861 STDOUT terraform:  ]
2025-09-17 00:01:32.861765 | orchestrator | 00:01:32.861 STDOUT terraform:  + enable_dhcp = true
2025-09-17 00:01:32.861797 | orchestrator | 00:01:32.861 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-17 00:01:32.861828 | orchestrator | 00:01:32.861 STDOUT terraform:  + id = (known after apply)
2025-09-17 00:01:32.861850 | orchestrator | 00:01:32.861 STDOUT terraform:  + ip_version = 4
2025-09-17 00:01:32.861880 | orchestrator | 00:01:32.861 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-17 00:01:32.861911 | orchestrator | 00:01:32.861 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-17 00:01:32.861947 | orchestrator | 00:01:32.861 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-17 00:01:32.861979 | orchestrator | 00:01:32.861 STDOUT terraform:  + network_id = (known after apply)
2025-09-17 00:01:32.862000 | orchestrator | 00:01:32.861 STDOUT terraform:  + no_gateway = false
2025-09-17 00:01:32.862134 | orchestrator | 00:01:32.861 STDOUT terraform:  + region = (known after apply)
2025-09-17 00:01:32.864502 | orchestrator | 00:01:32.862 STDOUT terraform:  + service_types = (known after apply)
2025-09-17 00:01:32.865009 | orchestrator | 00:01:32.862 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 00:01:32.865090 | orchestrator | 00:01:32.862 STDOUT terraform:  + allocation_pool {
2025-09-17 00:01:32.865174 | orchestrator | 00:01:32.862 STDOUT terraform:  + end = "192.168.31.250"
2025-09-17 00:01:32.865288 | orchestrator | 00:01:32.862 STDOUT terraform:  + start = "192.168.31.200"
2025-09-17 00:01:32.865297 | orchestrator | 00:01:32.862 STDOUT terraform:  }
2025-09-17 00:01:32.865302 | orchestrator | 00:01:32.862 STDOUT terraform:  }
2025-09-17 00:01:32.865307 | orchestrator | 00:01:32.862 STDOUT terraform:  # terraform_data.image will be created
2025-09-17 00:01:32.865311 | orchestrator | 00:01:32.862 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-17 00:01:32.865315 | orchestrator | 00:01:32.862 STDOUT terraform:  + id = (known after apply)
2025-09-17 00:01:32.865319 | orchestrator | 00:01:32.862 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-17 00:01:32.865323 | orchestrator | 00:01:32.862 STDOUT terraform:  + output = (known after apply)
2025-09-17 00:01:32.865327 | orchestrator | 00:01:32.862 STDOUT terraform:  }
2025-09-17 00:01:32.865338 | orchestrator | 00:01:32.862 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-17 00:01:32.865343 | orchestrator | 00:01:32.862 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-17 00:01:32.865346 | orchestrator | 00:01:32.862 STDOUT terraform:  + id = (known after apply)
2025-09-17 00:01:32.865350 | orchestrator | 00:01:32.862 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-17 00:01:32.865354 | orchestrator | 00:01:32.862 STDOUT terraform:  + output = (known after apply)
2025-09-17 00:01:32.865358 | orchestrator | 00:01:32.862 STDOUT terraform:  }
2025-09-17 00:01:32.865362 | orchestrator | 00:01:32.862 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-17 00:01:32.865377 | orchestrator | 00:01:32.862 STDOUT terraform: Changes to Outputs:
2025-09-17 00:01:32.865381 | orchestrator | 00:01:32.862 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-17 00:01:32.865384 | orchestrator | 00:01:32.862 STDOUT terraform:  + private_key = (sensitive value)
2025-09-17 00:01:32.929760 | orchestrator | 00:01:32.929 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-17 00:01:32.930228 | orchestrator | 00:01:32.930 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=3ad87621-aab7-473b-0348-8f9cfc8c4fcd]
2025-09-17 00:01:33.060391 | orchestrator | 00:01:33.060 STDOUT terraform: terraform_data.image: Creating...
2025-09-17 00:01:33.060463 | orchestrator | 00:01:33.060 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=f3dc1aa5-5965-6ebb-40f8-d0e2815ac4da]
2025-09-17 00:01:33.080113 | orchestrator | 00:01:33.079 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-17 00:01:33.080223 | orchestrator | 00:01:33.080 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-17 00:01:33.090349 | orchestrator | 00:01:33.090 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-17 00:01:33.100812 | orchestrator | 00:01:33.097 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-17 00:01:33.101984 | orchestrator | 00:01:33.101 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-17 00:01:33.105057 | orchestrator | 00:01:33.102 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-17 00:01:33.105442 | orchestrator | 00:01:33.105 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
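The `security_group_rule_vrrp` entry in the plan above (IP protocol number 112 is VRRP, used by keepalived) corresponds to a provider resource block roughly like the following sketch. The attribute values are taken from the plan output; the `security_group_id` reference to the node security group is an assumption, not shown in the log:

```hcl
# Sketch reconstructed from the plan output; the security group
# reference below is assumed, not confirmed by the log.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol number for VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```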
2025-09-17 00:01:33.105488 | orchestrator | 00:01:33.105 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-17 00:01:33.105546 | orchestrator | 00:01:33.105 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-17 00:01:33.131949 | orchestrator | 00:01:33.128 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-17 00:01:33.583588 | orchestrator | 00:01:33.583 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-17 00:01:33.585864 | orchestrator | 00:01:33.585 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-17 00:01:33.589705 | orchestrator | 00:01:33.589 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-17 00:01:33.594935 | orchestrator | 00:01:33.594 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-17 00:01:33.653490 | orchestrator | 00:01:33.653 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-09-17 00:01:33.657720 | orchestrator | 00:01:33.657 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-17 00:01:34.184311 | orchestrator | 00:01:34.183 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=33ebc85d-3806-4ae6-bb1b-41b89aaa445e]
2025-09-17 00:01:34.194519 | orchestrator | 00:01:34.194 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-17 00:01:36.818575 | orchestrator | 00:01:36.817 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=922621dd-972b-4e9a-bc9e-e1e44ba503f7]
2025-09-17 00:01:36.824659 | orchestrator | 00:01:36.824 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-17 00:01:36.824728 | orchestrator | 00:01:36.824 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=47b64ee5-5944-488f-91ba-80947343c2c4]
2025-09-17 00:01:36.830421 | orchestrator | 00:01:36.830 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-17 00:01:36.837414 | orchestrator | 00:01:36.837 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=03b82624-b2d4-4492-aa08-93320337b68f]
2025-09-17 00:01:36.844961 | orchestrator | 00:01:36.844 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-17 00:01:36.925310 | orchestrator | 00:01:36.924 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=69134018-d148-466a-9d44-263112a1226d]
2025-09-17 00:01:36.933046 | orchestrator | 00:01:36.932 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-17 00:01:36.941803 | orchestrator | 00:01:36.941 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=6d2e8bc3-4c44-4e8e-a645-39611fbfc66e]
2025-09-17 00:01:36.948010 | orchestrator | 00:01:36.947 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-17 00:01:37.012482 | orchestrator | 00:01:37.011 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=833e18f8-a2f7-4c8c-b617-8f83ac55bde9]
2025-09-17 00:01:37.026858 | orchestrator | 00:01:37.026 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-17 00:01:37.036417 | orchestrator | 00:01:37.036 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=6054240403a447329f1b4c5599c451330656c92e]
2025-09-17 00:01:37.053951 | orchestrator | 00:01:37.053 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-17 00:01:37.057601 | orchestrator | 00:01:37.057 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a]
2025-09-17 00:01:37.060948 | orchestrator | 00:01:37.060 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=db475cd5cda172e3669dd845dbf686315d3c84aa]
2025-09-17 00:01:37.064451 | orchestrator | 00:01:37.064 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-17 00:01:37.071062 | orchestrator | 00:01:37.070 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-17 00:01:37.095028 | orchestrator | 00:01:37.094 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=34b516b0-60cf-4ba1-b912-e488bac04690]
2025-09-17 00:01:37.127055 | orchestrator | 00:01:37.126 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=6f825aad-5321-4538-8ab0-212b689e74fb]
2025-09-17 00:01:37.550769 | orchestrator | 00:01:37.550 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=e64b4021-8d2d-4c49-b067-c44086593130]
2025-09-17 00:01:37.993388 | orchestrator | 00:01:37.992 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=bbffe9a5-bbfc-4a08-9371-914296fc32c1]
2025-09-17 00:01:38.008876 | orchestrator | 00:01:38.008 STDOUT terraform: openstack_networking_router_v2.router: Creating...
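The `subnet_management` subnet being created here matches the plan entry earlier in the log (CIDR, DNS servers, name, and allocation pool are all shown there). As a sketch, the corresponding HCL would look roughly like this; the `network_id` reference to `net_management` is an assumption:

```hcl
# Sketch based on the planned attribute values in this log;
# the network_id reference is assumed, not confirmed by the log.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```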
2025-09-17 00:01:40.240087 | orchestrator | 00:01:40.234 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=a65335ad-556c-497a-b79b-8ac858b0e80d]
2025-09-17 00:01:40.246869 | orchestrator | 00:01:40.246 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=a7061de2-0566-4272-9d34-57a6f035e6cb]
2025-09-17 00:01:40.248597 | orchestrator | 00:01:40.248 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=d9d09841-f300-4329-a2ac-b45b236de72f]
2025-09-17 00:01:40.420880 | orchestrator | 00:01:40.419 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=14e35ba1-2869-4981-bf2a-53888936c571]
2025-09-17 00:01:40.514224 | orchestrator | 00:01:40.513 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=f2564849-a860-4c45-8229-f3f58755f2f5]
2025-09-17 00:01:40.523821 | orchestrator | 00:01:40.523 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=accea0c0-dd19-4395-8ed0-8cd720a4863e]
2025-09-17 00:01:41.066600 | orchestrator | 00:01:41.066 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=12d7ead4-4350-46be-b3f2-8144c132a5f6]
2025-09-17 00:01:41.074497 | orchestrator | 00:01:41.074 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-17 00:01:41.076032 | orchestrator | 00:01:41.075 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-17 00:01:41.076674 | orchestrator | 00:01:41.076 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-17 00:01:41.378433 | orchestrator | 00:01:41.377 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=57d318ee-aff6-4c32-87e0-df6ecede2dfc]
2025-09-17 00:01:41.386617 | orchestrator | 00:01:41.386 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-17 00:01:41.386676 | orchestrator | 00:01:41.386 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-17 00:01:41.386750 | orchestrator | 00:01:41.386 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-17 00:01:41.387579 | orchestrator | 00:01:41.387 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-17 00:01:41.391083 | orchestrator | 00:01:41.390 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-17 00:01:41.395796 | orchestrator | 00:01:41.395 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-17 00:01:41.441163 | orchestrator | 00:01:41.440 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=788ef6eb-ffaf-44b8-aa59-61d548247c51]
2025-09-17 00:01:41.451918 | orchestrator | 00:01:41.451 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-17 00:01:41.452290 | orchestrator | 00:01:41.452 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-17 00:01:41.453364 | orchestrator | 00:01:41.453 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-17 00:01:41.604092 | orchestrator | 00:01:41.603 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=ecf1ae26-5ed5-4bc9-8227-4a9eb31c7360]
2025-09-17 00:01:41.611065 | orchestrator | 00:01:41.610 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-17 00:01:41.635277 | orchestrator | 00:01:41.635 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=6e93beb7-0fed-4f92-8c76-65dc82678657]
2025-09-17 00:01:41.649357 | orchestrator | 00:01:41.649 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-17 00:01:41.830315 | orchestrator | 00:01:41.829 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=98b4512f-7cfd-4008-82cb-f13d0b0decbe]
2025-09-17 00:01:41.844004 | orchestrator | 00:01:41.843 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-17 00:01:41.962566 | orchestrator | 00:01:41.962 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=6cd5af66-cc2a-49b1-8174-01c8e0c16804]
2025-09-17 00:01:41.978656 | orchestrator | 00:01:41.978 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-17 00:01:42.167704 | orchestrator | 00:01:42.167 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=7c6cf792-05d5-4b47-ab10-9aedd1894ebe]
2025-09-17 00:01:42.186732 | orchestrator | 00:01:42.186 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-17 00:01:42.280105 | orchestrator | 00:01:42.279 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=a9ec493e-d84b-4461-b85e-916ff66740c3]
2025-09-17 00:01:42.291536 | orchestrator | 00:01:42.291 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-17 00:01:42.328299 | orchestrator | 00:01:42.326 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=56d2dc85-87c5-4341-80bc-8da0aaef8ece]
2025-09-17 00:01:42.336376 | orchestrator | 00:01:42.336 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-17 00:01:42.383599 | orchestrator | 00:01:42.383 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=443717a1-0c21-43f4-8cc6-ec3566446c08]
2025-09-17 00:01:42.442481 | orchestrator | 00:01:42.442 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=3255d1e2-e4e5-44e0-9651-cab2c2d8f3a2]
2025-09-17 00:01:42.506744 | orchestrator | 00:01:42.506 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=5c8d3de4-ef35-4c0a-b5e4-be010312b3a1]
2025-09-17 00:01:42.517153 | orchestrator | 00:01:42.516 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=86e31e47-23cc-48b6-946b-d8a2d3709a77]
2025-09-17 00:01:42.906915 | orchestrator | 00:01:42.905 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=14638f58-0b3c-417f-a69d-555e47566135]
2025-09-17 00:01:43.026598 | orchestrator | 00:01:43.026 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=e7e7fa80-f40b-41ce-9532-4aaf3685ec7a]
2025-09-17 00:01:43.111945 | orchestrator | 00:01:43.111 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=53b9d046-f9f0-4c5a-b0c7-e4c57a890f40]
2025-09-17 00:01:43.231476 | orchestrator | 00:01:43.231 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=82437d7b-7008-479f-8c2b-04c2bc52d2ec]
2025-09-17 00:01:43.263655 | orchestrator | 00:01:43.263 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=619473f7-b947-4491-b724-181fb0445290]
2025-09-17 00:01:43.665562 | orchestrator | 00:01:43.665 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=b482aa97-0d48-4add-b5e9-4f3dbd36debc]
2025-09-17 00:01:43.673575 | orchestrator | 00:01:43.673 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-17 00:01:43.691558 | orchestrator | 00:01:43.691 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-17 00:01:43.700332 | orchestrator | 00:01:43.700 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-17 00:01:43.700365 | orchestrator | 00:01:43.700 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-17 00:01:43.718727 | orchestrator | 00:01:43.718 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-17 00:01:43.718835 | orchestrator | 00:01:43.718 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-17 00:01:43.725477 | orchestrator | 00:01:43.725 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
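The six `node_server` instances starting here (indices 0 through 5) are evidently created from a counted resource wired to the per-node management ports created just above. A minimal sketch, assuming the image data source and port resources shown in this log, with the name pattern and `flavor_name` variable as hypothetical placeholders:

```hcl
# Sketch only: count, image, key pair, and port names come from this
# log; the instance name pattern and node_flavor variable are assumed.
resource "openstack_compute_instance_v2" "node_server" {
  count       = 6
  name        = "testbed-node-${count.index}"
  image_id    = data.openstack_images_image_v2.image_node.id
  flavor_name = var.node_flavor
  key_pair    = openstack_compute_keypair_v2.key.name

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```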
2025-09-17 00:01:45.722718 | orchestrator | 00:01:45.722 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=d491f10d-c838-4d19-8e74-85d3b88b37e4]
2025-09-17 00:01:45.731846 | orchestrator | 00:01:45.731 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-17 00:01:45.738569 | orchestrator | 00:01:45.738 STDOUT terraform: local_file.inventory: Creating...
2025-09-17 00:01:45.745978 | orchestrator | 00:01:45.745 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-17 00:01:45.746177 | orchestrator | 00:01:45.746 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=74fb6e0b6bbcb65dc4faf7b2ef747435101c3327]
2025-09-17 00:01:45.750222 | orchestrator | 00:01:45.750 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=b56fb3ef7631263abb06f398db36c39bd55aa328]
2025-09-17 00:01:46.626946 | orchestrator | 00:01:46.626 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=d491f10d-c838-4d19-8e74-85d3b88b37e4]
2025-09-17 00:01:53.694352 | orchestrator | 00:01:53.693 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-17 00:01:53.702474 | orchestrator | 00:01:53.702 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-17 00:01:53.704773 | orchestrator | 00:01:53.704 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-17 00:01:53.719081 | orchestrator | 00:01:53.718 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-17 00:01:53.719162 | orchestrator | 00:01:53.719 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-17 00:01:53.728353 | orchestrator | 00:01:53.728 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-17 00:02:03.696081 | orchestrator | 00:02:03.695 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-17 00:02:03.703140 | orchestrator | 00:02:03.702 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-17 00:02:03.705404 | orchestrator | 00:02:03.705 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-17 00:02:03.719687 | orchestrator | 00:02:03.719 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-17 00:02:03.720743 | orchestrator | 00:02:03.719 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-17 00:02:03.728956 | orchestrator | 00:02:03.728 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-17 00:02:04.486129 | orchestrator | 00:02:04.485 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=8ac26f77-e680-4a49-ab27-6eafb08d302e]
2025-09-17 00:02:13.703991 | orchestrator | 00:02:13.703 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-09-17 00:02:13.706215 | orchestrator | 00:02:13.705 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-09-17 00:02:13.720411 | orchestrator | 00:02:13.720 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-09-17 00:02:13.720529 | orchestrator | 00:02:13.720 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-17 00:02:13.729623 | orchestrator | 00:02:13.729 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-09-17 00:02:14.593569 | orchestrator | 00:02:14.593 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=fd1fddb2-d1a7-4a18-afad-05b3dcab322a]
2025-09-17 00:02:14.713232 | orchestrator | 00:02:14.712 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=b4d0a539-08d2-4771-8dab-aa3eb104e662]
2025-09-17 00:02:14.859133 | orchestrator | 00:02:14.858 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=cfcde7e1-a960-472a-898a-d0d6c0e5f1df]
2025-09-17 00:02:15.175549 | orchestrator | 00:02:15.175 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=b3c7b41f-2d70-4c24-a6a3-2aed13a78fe2]
2025-09-17 00:02:23.723258 | orchestrator | 00:02:23.722 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2025-09-17 00:02:24.603812 | orchestrator | 00:02:24.603 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=e8682be1-853c-445d-9e66-0a0ff75ac20a]
2025-09-17 00:02:24.739682 | orchestrator | 00:02:24.738 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-17 00:02:24.748145 | orchestrator | 00:02:24.747 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-17 00:02:24.754393 | orchestrator | 00:02:24.754 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-17 00:02:24.755769 | orchestrator | 00:02:24.755 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-17 00:02:24.761846 | orchestrator | 00:02:24.761 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-17 00:02:24.764260 | orchestrator | 00:02:24.764 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-17 00:02:24.766259 | orchestrator | 00:02:24.766 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-17 00:02:24.766614 | orchestrator | 00:02:24.766 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-17 00:02:24.768576 | orchestrator | 00:02:24.768 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-17 00:02:24.775458 | orchestrator | 00:02:24.775 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5177296656051015151]
2025-09-17 00:02:24.782773 | orchestrator | 00:02:24.782 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-17 00:02:24.804608 | orchestrator | 00:02:24.804 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-17 00:02:28.137667 | orchestrator | 00:02:28.137 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=e8682be1-853c-445d-9e66-0a0ff75ac20a/922621dd-972b-4e9a-bc9e-e1e44ba503f7]
2025-09-17 00:02:28.145362 | orchestrator | 00:02:28.144 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=cfcde7e1-a960-472a-898a-d0d6c0e5f1df/34b516b0-60cf-4ba1-b912-e488bac04690]
2025-09-17 00:02:28.180547 | orchestrator | 00:02:28.180 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=8ac26f77-e680-4a49-ab27-6eafb08d302e/23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a]
2025-09-17 00:02:28.193557 | orchestrator | 00:02:28.193 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=e8682be1-853c-445d-9e66-0a0ff75ac20a/6d2e8bc3-4c44-4e8e-a645-39611fbfc66e]
2025-09-17 00:02:28.205561 | orchestrator | 00:02:28.205 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=cfcde7e1-a960-472a-898a-d0d6c0e5f1df/69134018-d148-466a-9d44-263112a1226d]
2025-09-17 00:02:28.500312 | orchestrator | 00:02:28.499 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=8ac26f77-e680-4a49-ab27-6eafb08d302e/6f825aad-5321-4538-8ab0-212b689e74fb]
2025-09-17 00:02:34.299222 | orchestrator | 00:02:34.298 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=cfcde7e1-a960-472a-898a-d0d6c0e5f1df/47b64ee5-5944-488f-91ba-80947343c2c4]
2025-09-17 00:02:34.315220 | orchestrator | 00:02:34.314 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=e8682be1-853c-445d-9e66-0a0ff75ac20a/833e18f8-a2f7-4c8c-b617-8f83ac55bde9]
2025-09-17 00:02:34.351623 | orchestrator | 00:02:34.351 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=8ac26f77-e680-4a49-ab27-6eafb08d302e/03b82624-b2d4-4492-aa08-93320337b68f]
2025-09-17 00:02:34.806113 | orchestrator | 00:02:34.805 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-17 00:02:44.806320 | orchestrator | 00:02:44.806 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-17 00:02:45.327740 | orchestrator | 00:02:45.327 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=8ee22472-ee16-4dd5-b66d-a288f1164615]
2025-09-17 00:02:45.354971 | orchestrator | 00:02:45.354 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-17 00:02:45.355080 | orchestrator | 00:02:45.354 STDOUT terraform: Outputs:
2025-09-17 00:02:45.355096 | orchestrator | 00:02:45.354 STDOUT terraform: manager_address = 
2025-09-17 00:02:45.355108 | orchestrator | 00:02:45.354 STDOUT terraform: private_key = 
2025-09-17 00:02:45.562777 | orchestrator | ok: Runtime: 0:01:17.991221
2025-09-17 00:02:45.600632 | 
2025-09-17 00:02:45.600756 | TASK [Create infrastructure (stable)]
2025-09-17 00:02:46.135725 | orchestrator | skipping: Conditional result was False
2025-09-17 00:02:46.155226 | 
2025-09-17 00:02:46.155377 | TASK [Fetch manager address]
2025-09-17 00:02:46.572287 | orchestrator | ok
2025-09-17 00:02:46.582774 | 
2025-09-17 00:02:46.582929 | TASK [Set manager_host address]
2025-09-17 00:02:46.669232 | orchestrator | ok
2025-09-17 00:02:46.682745 | 
2025-09-17 00:02:46.682905 | LOOP [Update ansible collections]
2025-09-17 00:02:47.418683 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-17 00:02:47.419134 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-17 00:02:47.419192 | orchestrator | Starting galaxy collection install process
2025-09-17 00:02:47.419228 | orchestrator | Process install dependency map
2025-09-17 00:02:47.419259 | orchestrator | Starting collection install process
2025-09-17 00:02:47.419288 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2025-09-17 00:02:47.419344 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2025-09-17 00:02:47.419407 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-09-17 00:02:47.419511 | orchestrator | ok: Item: commons Runtime: 0:00:00.445298
2025-09-17 00:02:48.110299 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-17 00:02:48.110423 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-17 00:02:48.110454 | orchestrator | Starting galaxy collection install process
2025-09-17 00:02:48.110478 | orchestrator | Process install dependency map
2025-09-17 00:02:48.110499 | orchestrator | Starting collection install process
2025-09-17 00:02:48.110519 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2025-09-17 00:02:48.110560 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2025-09-17 00:02:48.110583 | orchestrator | osism.services:999.0.0 was installed successfully
2025-09-17 00:02:48.110616 | orchestrator | ok: Item: services Runtime: 0:00:00.487067
2025-09-17 00:02:48.133008 | 
2025-09-17 00:02:48.133172 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-09-17 00:02:58.657964 | orchestrator | ok
2025-09-17 00:02:58.669044 | 
2025-09-17 00:02:58.669173 | TASK [Wait a little longer for the manager so that
everything is ready] 2025-09-17 00:03:58.717475 | orchestrator | ok 2025-09-17 00:03:58.733133 | 2025-09-17 00:03:58.733350 | TASK [Fetch manager ssh hostkey] 2025-09-17 00:04:00.316048 | orchestrator | Output suppressed because no_log was given 2025-09-17 00:04:00.332060 | 2025-09-17 00:04:00.332221 | TASK [Get ssh keypair from terraform environment] 2025-09-17 00:04:00.869012 | orchestrator | ok: Runtime: 0:00:00.009287 2025-09-17 00:04:00.885809 | 2025-09-17 00:04:00.885969 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-17 00:04:00.934326 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-17 00:04:00.944072 | 2025-09-17 00:04:00.944186 | TASK [Run manager part 0] 2025-09-17 00:04:02.062998 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-17 00:04:02.104735 | orchestrator | 2025-09-17 00:04:02.104807 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-17 00:04:02.104825 | orchestrator | 2025-09-17 00:04:02.104854 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-17 00:04:03.960282 | orchestrator | ok: [testbed-manager] 2025-09-17 00:04:03.960342 | orchestrator | 2025-09-17 00:04:03.960370 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-17 00:04:03.960384 | orchestrator | 2025-09-17 00:04:03.960396 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-17 00:04:05.875593 | orchestrator | ok: [testbed-manager] 2025-09-17 00:04:05.875692 | orchestrator | 2025-09-17 00:04:05.875710 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-17 00:04:06.564403 | 
orchestrator | ok: [testbed-manager] 2025-09-17 00:04:06.564451 | orchestrator | 2025-09-17 00:04:06.564460 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-17 00:04:06.615472 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:04:06.615518 | orchestrator | 2025-09-17 00:04:06.615529 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-17 00:04:06.644113 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:04:06.644144 | orchestrator | 2025-09-17 00:04:06.644150 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-17 00:04:06.676198 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:04:06.676228 | orchestrator | 2025-09-17 00:04:06.676236 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-17 00:04:06.704468 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:04:06.704498 | orchestrator | 2025-09-17 00:04:06.704505 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-17 00:04:06.732373 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:04:06.732419 | orchestrator | 2025-09-17 00:04:06.732426 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-17 00:04:06.758686 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:04:06.758732 | orchestrator | 2025-09-17 00:04:06.758740 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-17 00:04:06.784343 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:04:06.784393 | orchestrator | 2025-09-17 00:04:06.784401 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-17 00:04:07.519846 | orchestrator | changed: [testbed-manager] 2025-09-17 00:04:07.519919 | 
orchestrator | 2025-09-17 00:04:07.519936 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-17 00:06:48.877263 | orchestrator | changed: [testbed-manager] 2025-09-17 00:06:48.877342 | orchestrator | 2025-09-17 00:06:48.877359 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-17 00:08:15.393068 | orchestrator | changed: [testbed-manager] 2025-09-17 00:08:15.393176 | orchestrator | 2025-09-17 00:08:15.393194 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-17 00:08:37.920733 | orchestrator | changed: [testbed-manager] 2025-09-17 00:08:37.920814 | orchestrator | 2025-09-17 00:08:37.920835 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-17 00:08:46.402463 | orchestrator | changed: [testbed-manager] 2025-09-17 00:08:46.402558 | orchestrator | 2025-09-17 00:08:46.402574 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-17 00:08:46.451693 | orchestrator | ok: [testbed-manager] 2025-09-17 00:08:46.451747 | orchestrator | 2025-09-17 00:08:46.451757 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-17 00:08:47.208015 | orchestrator | ok: [testbed-manager] 2025-09-17 00:08:47.208067 | orchestrator | 2025-09-17 00:08:47.208078 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-17 00:08:47.917697 | orchestrator | changed: [testbed-manager] 2025-09-17 00:08:47.917752 | orchestrator | 2025-09-17 00:08:47.917765 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-17 00:08:54.287352 | orchestrator | changed: [testbed-manager] 2025-09-17 00:08:54.287412 | orchestrator | 2025-09-17 00:08:54.287442 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-09-17 00:09:00.038537 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:00.038597 | orchestrator | 2025-09-17 00:09:00.038608 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-17 00:09:02.660359 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:02.660470 | orchestrator | 2025-09-17 00:09:02.660486 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-17 00:09:04.376945 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:04.377051 | orchestrator | 2025-09-17 00:09:04.377067 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-17 00:09:05.478897 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-17 00:09:05.478949 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-17 00:09:05.478956 | orchestrator | 2025-09-17 00:09:05.478963 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-17 00:09:05.521561 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-17 00:09:05.521627 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-17 00:09:05.521643 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-17 00:09:05.521657 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-17 00:09:08.711869 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-17 00:09:08.711988 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-17 00:09:08.712003 | orchestrator | 2025-09-17 00:09:08.712017 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-17 00:09:09.272547 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:09.272655 | orchestrator | 2025-09-17 00:09:09.272671 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-17 00:09:28.894262 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-17 00:09:28.894389 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-17 00:09:28.894407 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-17 00:09:28.894421 | orchestrator | 2025-09-17 00:09:28.894435 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-17 00:09:31.218260 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-17 00:09:31.218359 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-17 00:09:31.218369 | orchestrator | 2025-09-17 00:09:31.218378 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-17 00:09:31.218387 | orchestrator | 2025-09-17 00:09:31.218396 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-17 00:09:32.585505 | orchestrator | ok: [testbed-manager] 2025-09-17 00:09:32.585610 | orchestrator | 2025-09-17 00:09:32.585631 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-17 00:09:32.632011 | orchestrator | ok: [testbed-manager] 2025-09-17 00:09:32.632085 | 
orchestrator | 2025-09-17 00:09:32.632102 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-17 00:09:32.691471 | orchestrator | ok: [testbed-manager] 2025-09-17 00:09:32.691548 | orchestrator | 2025-09-17 00:09:32.691563 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-17 00:09:33.419180 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:33.419234 | orchestrator | 2025-09-17 00:09:33.419243 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-17 00:09:34.136614 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:34.136720 | orchestrator | 2025-09-17 00:09:34.136736 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-17 00:09:36.000749 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-17 00:09:36.000795 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-17 00:09:36.000803 | orchestrator | 2025-09-17 00:09:36.000817 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-17 00:09:37.282356 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:37.282520 | orchestrator | 2025-09-17 00:09:37.282539 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-17 00:09:38.998279 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-17 00:09:38.998478 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-17 00:09:38.998495 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-17 00:09:38.998506 | orchestrator | 2025-09-17 00:09:38.998518 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-17 00:09:39.057386 | orchestrator | skipping: 
[testbed-manager] 2025-09-17 00:09:39.057478 | orchestrator | 2025-09-17 00:09:39.057496 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-17 00:09:39.618710 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:39.618756 | orchestrator | 2025-09-17 00:09:39.618766 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-17 00:09:39.687209 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:09:39.687251 | orchestrator | 2025-09-17 00:09:39.687261 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-17 00:09:40.527160 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-17 00:09:40.527213 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:40.527224 | orchestrator | 2025-09-17 00:09:40.527231 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-17 00:09:40.564789 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:09:40.564828 | orchestrator | 2025-09-17 00:09:40.564837 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-17 00:09:40.596238 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:09:40.596273 | orchestrator | 2025-09-17 00:09:40.596281 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-17 00:09:40.625790 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:09:40.625819 | orchestrator | 2025-09-17 00:09:40.625826 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-17 00:09:40.671328 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:09:40.671365 | orchestrator | 2025-09-17 00:09:40.671376 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-17 00:09:41.355113 | orchestrator 
| ok: [testbed-manager] 2025-09-17 00:09:41.355200 | orchestrator | 2025-09-17 00:09:41.355215 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-17 00:09:41.355228 | orchestrator | 2025-09-17 00:09:41.355240 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-17 00:09:42.713678 | orchestrator | ok: [testbed-manager] 2025-09-17 00:09:42.713717 | orchestrator | 2025-09-17 00:09:42.713724 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-17 00:09:43.652697 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:43.652762 | orchestrator | 2025-09-17 00:09:43.652778 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:09:43.652791 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-17 00:09:43.652803 | orchestrator | 2025-09-17 00:09:44.194472 | orchestrator | ok: Runtime: 0:05:42.559766 2025-09-17 00:09:44.212138 | 2025-09-17 00:09:44.212263 | TASK [Point out that logging in on the manager is now possible] 2025-09-17 00:09:44.259497 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-17 00:09:44.268643 | 2025-09-17 00:09:44.268735 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-17 00:09:44.300017 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
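The job's 'Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"' tasks (run before this point and again after the manager reboot) poll the SSH banner until it matches. A minimal sketch of that check; the banner read is mocked here to keep the example self-contained, whereas a real probe would read the first line from `/dev/tcp/<manager_ip>/22` in bash or use `nc`:

```shell
# Mocked OpenSSH greeting; in a real probe this would come off the socket.
banner="SSH-2.0-OpenSSH_9.6p1 Ubuntu"

# Same two conditions the wait task encodes: the port answered at all
# (we got a banner) and the banner matches the expected pattern.
case "$banner" in
  *OpenSSH*) status="manager SSH is ready" ;;
  *)         status="still waiting" ;;
esac
echo "$status"
```

Matching on the banner rather than the open port alone avoids racing a half-booted sshd that accepts connections but cannot authenticate yet.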
2025-09-17 00:09:44.309516 | 2025-09-17 00:09:44.309632 | TASK [Run manager part 1 + 2] 2025-09-17 00:09:45.096364 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-17 00:09:45.147965 | orchestrator | 2025-09-17 00:09:45.148050 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-17 00:09:45.148067 | orchestrator | 2025-09-17 00:09:45.148097 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-17 00:09:47.672886 | orchestrator | ok: [testbed-manager] 2025-09-17 00:09:47.672975 | orchestrator | 2025-09-17 00:09:47.673048 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-17 00:09:47.710592 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:09:47.710651 | orchestrator | 2025-09-17 00:09:47.710671 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-17 00:09:47.746724 | orchestrator | ok: [testbed-manager] 2025-09-17 00:09:47.746785 | orchestrator | 2025-09-17 00:09:47.746802 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-17 00:09:47.790710 | orchestrator | ok: [testbed-manager] 2025-09-17 00:09:47.790761 | orchestrator | 2025-09-17 00:09:47.790775 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-17 00:09:47.849792 | orchestrator | ok: [testbed-manager] 2025-09-17 00:09:47.849852 | orchestrator | 2025-09-17 00:09:47.849872 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-17 00:09:47.903691 | orchestrator | ok: [testbed-manager] 2025-09-17 00:09:47.903754 | orchestrator | 2025-09-17 00:09:47.903773 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-17 00:09:47.941794 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-17 00:09:47.941845 | orchestrator | 2025-09-17 00:09:47.941859 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-17 00:09:48.620652 | orchestrator | ok: [testbed-manager] 2025-09-17 00:09:48.620704 | orchestrator | 2025-09-17 00:09:48.620722 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-17 00:09:48.668995 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:09:48.669033 | orchestrator | 2025-09-17 00:09:48.669042 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-17 00:09:50.000559 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:50.000620 | orchestrator | 2025-09-17 00:09:50.000661 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-17 00:09:50.557244 | orchestrator | ok: [testbed-manager] 2025-09-17 00:09:50.557283 | orchestrator | 2025-09-17 00:09:50.557293 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-17 00:09:51.836894 | orchestrator | changed: [testbed-manager] 2025-09-17 00:09:51.836947 | orchestrator | 2025-09-17 00:09:51.836963 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-17 00:10:07.235052 | orchestrator | changed: [testbed-manager] 2025-09-17 00:10:07.235109 | orchestrator | 2025-09-17 00:10:07.235124 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-17 00:10:07.797169 | orchestrator | ok: [testbed-manager] 2025-09-17 00:10:07.797246 | orchestrator | 2025-09-17 00:10:07.797262 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-09-17 00:10:07.848066 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:10:07.848105 | orchestrator | 2025-09-17 00:10:07.848117 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-17 00:10:08.719104 | orchestrator | changed: [testbed-manager] 2025-09-17 00:10:08.719140 | orchestrator | 2025-09-17 00:10:08.719149 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-17 00:10:09.596278 | orchestrator | changed: [testbed-manager] 2025-09-17 00:10:09.596337 | orchestrator | 2025-09-17 00:10:09.596354 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-17 00:10:10.173109 | orchestrator | changed: [testbed-manager] 2025-09-17 00:10:10.173172 | orchestrator | 2025-09-17 00:10:10.173185 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-17 00:10:10.207771 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-17 00:10:10.207827 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-17 00:10:10.207833 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-17 00:10:10.207838 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-17 00:10:12.531927 | orchestrator | changed: [testbed-manager] 2025-09-17 00:10:12.532024 | orchestrator | 2025-09-17 00:10:12.532037 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-17 00:10:21.552072 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-17 00:10:21.552167 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-17 00:10:21.552183 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-17 00:10:21.552195 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-17 00:10:21.552236 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-17 00:10:21.552248 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-17 00:10:21.552260 | orchestrator | 2025-09-17 00:10:21.552272 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-17 00:10:22.581430 | orchestrator | changed: [testbed-manager] 2025-09-17 00:10:22.581519 | orchestrator | 2025-09-17 00:10:22.581537 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-17 00:10:22.623293 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:10:22.623354 | orchestrator | 2025-09-17 00:10:22.623368 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-17 00:10:25.753067 | orchestrator | changed: [testbed-manager] 2025-09-17 00:10:25.753169 | orchestrator | 2025-09-17 00:10:25.753185 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-17 00:10:25.791013 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:10:25.791066 | orchestrator | 2025-09-17 00:10:25.791075 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-17 00:12:05.282239 | orchestrator | changed: [testbed-manager] 2025-09-17 
00:12:05.282339 | orchestrator | 2025-09-17 00:12:05.282356 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-17 00:12:06.400070 | orchestrator | ok: [testbed-manager] 2025-09-17 00:12:06.400156 | orchestrator | 2025-09-17 00:12:06.400172 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:12:06.400186 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-17 00:12:06.400198 | orchestrator | 2025-09-17 00:12:06.926229 | orchestrator | ok: Runtime: 0:02:21.915996 2025-09-17 00:12:06.943397 | 2025-09-17 00:12:06.943616 | TASK [Reboot manager] 2025-09-17 00:12:08.479832 | orchestrator | ok: Runtime: 0:00:00.990069 2025-09-17 00:12:08.496980 | 2025-09-17 00:12:08.497126 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-17 00:12:22.781124 | orchestrator | ok 2025-09-17 00:12:22.791858 | 2025-09-17 00:12:22.791980 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-17 00:13:22.837440 | orchestrator | ok 2025-09-17 00:13:22.847678 | 2025-09-17 00:13:22.847827 | TASK [Deploy manager + bootstrap nodes] 2025-09-17 00:13:25.242101 | orchestrator | 2025-09-17 00:13:25.242338 | orchestrator | # DEPLOY MANAGER 2025-09-17 00:13:25.242365 | orchestrator | 2025-09-17 00:13:25.242380 | orchestrator | + set -e 2025-09-17 00:13:25.242394 | orchestrator | + echo 2025-09-17 00:13:25.242408 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-17 00:13:25.242426 | orchestrator | + echo 2025-09-17 00:13:25.242503 | orchestrator | + cat /opt/manager-vars.sh 2025-09-17 00:13:25.245587 | orchestrator | export NUMBER_OF_NODES=6 2025-09-17 00:13:25.245617 | orchestrator | 2025-09-17 00:13:25.245629 | orchestrator | export CEPH_VERSION=reef 2025-09-17 00:13:25.245642 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-17 00:13:25.245655 | orchestrator 
| export MANAGER_VERSION=latest 2025-09-17 00:13:25.245678 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-17 00:13:25.245689 | orchestrator | 2025-09-17 00:13:25.245707 | orchestrator | export ARA=false 2025-09-17 00:13:25.245719 | orchestrator | export DEPLOY_MODE=manager 2025-09-17 00:13:25.245736 | orchestrator | export TEMPEST=true 2025-09-17 00:13:25.245748 | orchestrator | export IS_ZUUL=true 2025-09-17 00:13:25.245758 | orchestrator | 2025-09-17 00:13:25.245776 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183 2025-09-17 00:13:25.245788 | orchestrator | export EXTERNAL_API=false 2025-09-17 00:13:25.245799 | orchestrator | 2025-09-17 00:13:25.245810 | orchestrator | export IMAGE_USER=ubuntu 2025-09-17 00:13:25.245824 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-17 00:13:25.245835 | orchestrator | 2025-09-17 00:13:25.245846 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-17 00:13:25.245912 | orchestrator | 2025-09-17 00:13:25.245925 | orchestrator | + echo 2025-09-17 00:13:25.245938 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-17 00:13:25.246775 | orchestrator | ++ export INTERACTIVE=false 2025-09-17 00:13:25.246800 | orchestrator | ++ INTERACTIVE=false 2025-09-17 00:13:25.246818 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-17 00:13:25.246832 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-17 00:13:25.247007 | orchestrator | + source /opt/manager-vars.sh 2025-09-17 00:13:25.247024 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-17 00:13:25.247037 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-17 00:13:25.247052 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-17 00:13:25.247063 | orchestrator | ++ CEPH_VERSION=reef 2025-09-17 00:13:25.247074 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-17 00:13:25.247085 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-17 00:13:25.247095 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-17 00:13:25.247106 | 
orchestrator | ++ MANAGER_VERSION=latest
2025-09-17 00:13:25.247122 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-17 00:13:25.247142 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-17 00:13:25.247156 | orchestrator | ++ export ARA=false
2025-09-17 00:13:25.247168 | orchestrator | ++ ARA=false
2025-09-17 00:13:25.247178 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-17 00:13:25.247189 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-17 00:13:25.247199 | orchestrator | ++ export TEMPEST=true
2025-09-17 00:13:25.247210 | orchestrator | ++ TEMPEST=true
2025-09-17 00:13:25.247220 | orchestrator | ++ export IS_ZUUL=true
2025-09-17 00:13:25.247231 | orchestrator | ++ IS_ZUUL=true
2025-09-17 00:13:25.247245 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183
2025-09-17 00:13:25.247262 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183
2025-09-17 00:13:25.247272 | orchestrator | ++ export EXTERNAL_API=false
2025-09-17 00:13:25.247283 | orchestrator | ++ EXTERNAL_API=false
2025-09-17 00:13:25.247293 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-17 00:13:25.247304 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-17 00:13:25.247314 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-17 00:13:25.247325 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-17 00:13:25.247341 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-17 00:13:25.247369 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-17 00:13:25.247393 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-09-17 00:13:25.303387 | orchestrator | + docker version
2025-09-17 00:13:25.557744 | orchestrator | Client: Docker Engine - Community
2025-09-17 00:13:25.557848 | orchestrator |  Version:           27.5.1
2025-09-17 00:13:25.557863 | orchestrator |  API version:       1.47
2025-09-17 00:13:25.557877 | orchestrator |  Go version:        go1.22.11
2025-09-17 00:13:25.557888 | orchestrator |  Git commit:        9f9e405
2025-09-17 00:13:25.557900 | orchestrator |  Built:             Wed Jan 22 13:41:48 2025
2025-09-17 00:13:25.557912 | orchestrator |  OS/Arch:           linux/amd64
2025-09-17 00:13:25.557923 | orchestrator |  Context:           default
2025-09-17 00:13:25.557934 | orchestrator |
2025-09-17 00:13:25.557945 | orchestrator | Server: Docker Engine - Community
2025-09-17 00:13:25.557956 | orchestrator |  Engine:
2025-09-17 00:13:25.557967 | orchestrator |   Version:          27.5.1
2025-09-17 00:13:25.557979 | orchestrator |   API version:      1.47 (minimum version 1.24)
2025-09-17 00:13:25.558059 | orchestrator |   Go version:       go1.22.11
2025-09-17 00:13:25.558073 | orchestrator |   Git commit:       4c9b3b0
2025-09-17 00:13:25.558084 | orchestrator |   Built:            Wed Jan 22 13:41:48 2025
2025-09-17 00:13:25.558095 | orchestrator |   OS/Arch:          linux/amd64
2025-09-17 00:13:25.558106 | orchestrator |   Experimental:     false
2025-09-17 00:13:25.558117 | orchestrator |  containerd:
2025-09-17 00:13:25.558128 | orchestrator |   Version:          1.7.27
2025-09-17 00:13:25.558139 | orchestrator |   GitCommit:        05044ec0a9a75232cad458027ca83437aae3f4da
2025-09-17 00:13:25.558150 | orchestrator |  runc:
2025-09-17 00:13:25.558161 | orchestrator |   Version:          1.2.5
2025-09-17 00:13:25.558172 | orchestrator |   GitCommit:        v1.2.5-0-g59923ef
2025-09-17 00:13:25.558183 | orchestrator |  docker-init:
2025-09-17 00:13:25.558193 | orchestrator |   Version:          0.19.0
2025-09-17 00:13:25.558205 | orchestrator |   GitCommit:        de40ad0
2025-09-17 00:13:25.560460 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-09-17 00:13:25.569104 | orchestrator | + set -e
2025-09-17 00:13:25.569152 | orchestrator | + source /opt/manager-vars.sh
2025-09-17 00:13:25.569169 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-17 00:13:25.569184 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-17 00:13:25.569196 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-17 00:13:25.569208 | orchestrator | ++ CEPH_VERSION=reef
2025-09-17 00:13:25.569219 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-17 00:13:25.569231 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-17 00:13:25.569242 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-17 00:13:25.569253 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-17 00:13:25.569264 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-17 00:13:25.569275 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-17 00:13:25.569285 | orchestrator | ++ export ARA=false
2025-09-17 00:13:25.569305 | orchestrator | ++ ARA=false
2025-09-17 00:13:25.569316 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-17 00:13:25.569327 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-17 00:13:25.569344 | orchestrator | ++ export TEMPEST=true
2025-09-17 00:13:25.569355 | orchestrator | ++ TEMPEST=true
2025-09-17 00:13:25.569366 | orchestrator | ++ export IS_ZUUL=true
2025-09-17 00:13:25.569376 | orchestrator | ++ IS_ZUUL=true
2025-09-17 00:13:25.569387 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183
2025-09-17 00:13:25.569398 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183
2025-09-17 00:13:25.569408 | orchestrator | ++ export EXTERNAL_API=false
2025-09-17 00:13:25.569419 | orchestrator | ++ EXTERNAL_API=false
2025-09-17 00:13:25.569429 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-17 00:13:25.569440 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-17 00:13:25.569450 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-17 00:13:25.569461 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-17 00:13:25.569525 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-17 00:13:25.569536 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-17 00:13:25.569547 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-17 00:13:25.569558 | orchestrator | ++ export INTERACTIVE=false
2025-09-17 00:13:25.569568 | orchestrator | ++ INTERACTIVE=false
2025-09-17 00:13:25.569579 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-17 00:13:25.569594 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-17 00:13:25.569613 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-17 00:13:25.569624 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-17 00:13:25.569635 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-09-17 00:13:25.576340 | orchestrator | + set -e
2025-09-17 00:13:25.576440 | orchestrator | + VERSION=reef
2025-09-17 00:13:25.577220 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-17 00:13:25.582726 | orchestrator | + [[ -n ceph_version: reef ]]
2025-09-17 00:13:25.582766 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-09-17 00:13:25.587853 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-09-17 00:13:25.593704 | orchestrator | + set -e
2025-09-17 00:13:25.593760 | orchestrator | + VERSION=2024.2
2025-09-17 00:13:25.594528 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-17 00:13:25.596359 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-09-17 00:13:25.596383 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-09-17 00:13:25.601831 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-09-17 00:13:25.602767 | orchestrator | ++ semver latest 7.0.0
2025-09-17 00:13:25.663859 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-17 00:13:25.663903 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-17 00:13:25.663915 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-09-17 00:13:25.663927 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-09-17 00:13:25.752710 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-17 00:13:25.757315 | orchestrator | + source /opt/venv/bin/activate
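The set-ceph-version.sh and set-openstack-version.sh traces above both follow the same grep-then-sed pattern: verify the key exists at the start of a line, then rewrite its value in place. A minimal sketch of that pattern (the `set_version` helper is hypothetical, for illustration only; the actual scripts live under /opt/configuration/scripts):

```shell
#!/bin/sh
# Sketch of the grep-then-sed pattern used by the set-*-version.sh scripts:
# only rewrite the key when it already exists in the file.
set -e

# set_version KEY VALUE FILE -- hypothetical helper, not the real script
set_version() {
    key="$1"; value="$2"; file="$3"
    # Only touch the file when the key is present at the start of a line
    if grep -q "^${key}:" "$file"; then
        sed -i "s/${key}: .*/${key}: ${value}/g" "$file"
    fi
}

# Usage against a throwaway file:
tmp=$(mktemp)
echo "ceph_version: quincy" > "$tmp"
set_version ceph_version reef "$tmp"
cat "$tmp"   # -> ceph_version: reef
rm -f "$tmp"
```

Guarding the sed with a grep keeps the script from silently doing nothing meaningful when the key is absent, which matches the `[[ -n ceph_version: reef ]]` check in the trace.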
2025-09-17 00:13:25.758910 | orchestrator | ++ deactivate nondestructive
2025-09-17 00:13:25.758934 | orchestrator | ++ '[' -n '' ']'
2025-09-17 00:13:25.759131 | orchestrator | ++ '[' -n '' ']'
2025-09-17 00:13:25.759147 | orchestrator | ++ hash -r
2025-09-17 00:13:25.759158 | orchestrator | ++ '[' -n '' ']'
2025-09-17 00:13:25.759169 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-17 00:13:25.759180 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-17 00:13:25.759191 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-17 00:13:25.759267 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-17 00:13:25.759621 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-17 00:13:25.759642 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-17 00:13:25.759652 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-17 00:13:25.759664 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-17 00:13:25.759675 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-17 00:13:25.759686 | orchestrator | ++ export PATH
2025-09-17 00:13:25.759697 | orchestrator | ++ '[' -n '' ']'
2025-09-17 00:13:25.759708 | orchestrator | ++ '[' -z '' ']'
2025-09-17 00:13:25.759718 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-17 00:13:25.759729 | orchestrator | ++ PS1='(venv) '
2025-09-17 00:13:25.759740 | orchestrator | ++ export PS1
2025-09-17 00:13:25.759750 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-17 00:13:25.759761 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-17 00:13:25.759771 | orchestrator | ++ hash -r
2025-09-17 00:13:25.759800 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-09-17 00:13:26.979150 | orchestrator |
2025-09-17 00:13:26.979259 | orchestrator | PLAY [Copy custom facts] *******************************************************
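The activate trace above reduces to a simple mechanism: save the old PATH, prepend the venv's bin directory, and let deactivate restore the saved value. A minimal sketch of that mechanism (hypothetical `my_activate`/`my_deactivate` functions, not the real activate script; /opt/venv is the location used in this log):

```shell
# Minimal sketch of what virtualenv's activate/deactivate do with PATH.
VENV=/opt/venv          # assumed venv location, as in the trace above

my_activate() {
    # Save the current PATH, then put the venv's bin directory first
    _OLD_VIRTUAL_PATH="$PATH"
    PATH="$VENV/bin:$PATH"
    export PATH VIRTUAL_ENV="$VENV"
}

my_deactivate() {
    # Restore the saved PATH and drop the venv markers
    PATH="$_OLD_VIRTUAL_PATH"
    export PATH
    unset _OLD_VIRTUAL_PATH VIRTUAL_ENV
}
```

Because PATH is prepended rather than replaced, tools like `ansible-playbook` resolve to the venv's copies first while system binaries stay reachable.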
2025-09-17 00:13:26.979274 | orchestrator |
2025-09-17 00:13:26.979286 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-17 00:13:27.538084 | orchestrator | ok: [testbed-manager]
2025-09-17 00:13:27.538188 | orchestrator |
2025-09-17 00:13:27.538203 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-17 00:13:28.529277 | orchestrator | changed: [testbed-manager]
2025-09-17 00:13:28.529389 | orchestrator |
2025-09-17 00:13:28.529408 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-09-17 00:13:28.529422 | orchestrator |
2025-09-17 00:13:28.529434 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-17 00:13:30.850784 | orchestrator | ok: [testbed-manager]
2025-09-17 00:13:30.850899 | orchestrator |
2025-09-17 00:13:30.850915 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-09-17 00:13:30.904166 | orchestrator | ok: [testbed-manager]
2025-09-17 00:13:30.904248 | orchestrator |
2025-09-17 00:13:30.904264 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-09-17 00:13:31.369442 | orchestrator | changed: [testbed-manager]
2025-09-17 00:13:31.369594 | orchestrator |
2025-09-17 00:13:31.369610 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-09-17 00:13:31.411743 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:13:31.411804 | orchestrator |
2025-09-17 00:13:31.411821 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-09-17 00:13:31.741069 | orchestrator | changed: [testbed-manager]
2025-09-17 00:13:31.741164 | orchestrator |
2025-09-17 00:13:31.741178 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-09-17 00:13:31.792817 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:13:31.792864 | orchestrator |
2025-09-17 00:13:31.792877 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-09-17 00:13:32.134214 | orchestrator | ok: [testbed-manager]
2025-09-17 00:13:32.134344 | orchestrator |
2025-09-17 00:13:32.134362 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-09-17 00:13:32.246788 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:13:32.247735 | orchestrator |
2025-09-17 00:13:32.247764 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-09-17 00:13:32.247777 | orchestrator |
2025-09-17 00:13:32.247791 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-17 00:13:33.918986 | orchestrator | ok: [testbed-manager]
2025-09-17 00:13:33.919098 | orchestrator |
2025-09-17 00:13:33.919115 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-09-17 00:13:34.023087 | orchestrator | included: osism.services.traefik for testbed-manager
2025-09-17 00:13:34.023183 | orchestrator |
2025-09-17 00:13:34.023198 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-09-17 00:13:34.077044 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-09-17 00:13:34.077168 | orchestrator |
2025-09-17 00:13:34.077184 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-09-17 00:13:35.148593 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-09-17 00:13:35.148719 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-09-17 00:13:35.148735 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-09-17 00:13:35.148747 | orchestrator |
2025-09-17 00:13:35.148760 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-09-17 00:13:36.897037 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-09-17 00:13:36.897167 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-09-17 00:13:36.897187 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-09-17 00:13:36.897201 | orchestrator |
2025-09-17 00:13:36.897215 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-09-17 00:13:37.523180 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-17 00:13:37.524048 | orchestrator | changed: [testbed-manager]
2025-09-17 00:13:37.524079 | orchestrator |
2025-09-17 00:13:37.524091 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-09-17 00:13:38.139466 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-17 00:13:38.139638 | orchestrator | changed: [testbed-manager]
2025-09-17 00:13:38.139656 | orchestrator |
2025-09-17 00:13:38.139670 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-09-17 00:13:38.194182 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:13:38.194258 | orchestrator |
2025-09-17 00:13:38.194272 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-09-17 00:13:38.528219 | orchestrator | ok: [testbed-manager]
2025-09-17 00:13:38.528325 | orchestrator |
2025-09-17 00:13:38.528340 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-09-17 00:13:38.604962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-09-17 00:13:38.605034 | orchestrator |
2025-09-17 00:13:38.605048 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-09-17 00:13:39.594190 | orchestrator | changed: [testbed-manager]
2025-09-17 00:13:39.594298 | orchestrator |
2025-09-17 00:13:39.594311 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-09-17 00:13:40.421796 | orchestrator | changed: [testbed-manager]
2025-09-17 00:13:40.421908 | orchestrator |
2025-09-17 00:13:40.421922 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-09-17 00:13:52.026391 | orchestrator | changed: [testbed-manager]
2025-09-17 00:13:52.026586 | orchestrator |
2025-09-17 00:13:52.026607 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-09-17 00:13:52.080270 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:13:52.080330 | orchestrator |
2025-09-17 00:13:52.080347 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-09-17 00:13:52.080361 | orchestrator |
2025-09-17 00:13:52.080373 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-17 00:13:53.923985 | orchestrator | ok: [testbed-manager]
2025-09-17 00:13:53.924129 | orchestrator |
2025-09-17 00:13:53.924202 | orchestrator | TASK [Apply manager role] ******************************************************
2025-09-17 00:13:54.033951 | orchestrator | included: osism.services.manager for testbed-manager
2025-09-17 00:13:54.034168 | orchestrator |
2025-09-17 00:13:54.034185 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-09-17 00:13:54.090355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-09-17 00:13:54.090469 | orchestrator |
2025-09-17 00:13:54.090486 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-09-17 00:13:56.676972 | orchestrator | ok: [testbed-manager]
2025-09-17 00:13:56.677086 | orchestrator |
2025-09-17 00:13:56.677102 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-09-17 00:13:56.723292 | orchestrator | ok: [testbed-manager]
2025-09-17 00:13:56.723383 | orchestrator |
2025-09-17 00:13:56.723402 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-09-17 00:13:56.859023 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-09-17 00:13:56.859121 | orchestrator |
2025-09-17 00:13:56.859135 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-09-17 00:13:59.830943 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-09-17 00:13:59.831054 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-09-17 00:13:59.831069 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-09-17 00:13:59.831081 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-09-17 00:13:59.831092 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-09-17 00:13:59.831103 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-09-17 00:13:59.831114 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-09-17 00:13:59.831124 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-09-17 00:13:59.831136 | orchestrator |
2025-09-17 00:13:59.831148 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-09-17 00:14:00.451949 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:00.452059 | orchestrator |
2025-09-17 00:14:00.452075 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-09-17 00:14:01.103625 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:01.103734 | orchestrator |
2025-09-17 00:14:01.103751 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-09-17 00:14:01.183203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-09-17 00:14:01.183294 | orchestrator |
2025-09-17 00:14:01.183311 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-09-17 00:14:02.406901 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-09-17 00:14:02.406993 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-09-17 00:14:02.407004 | orchestrator |
2025-09-17 00:14:02.407013 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-09-17 00:14:03.058838 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:03.058940 | orchestrator |
2025-09-17 00:14:03.058957 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-09-17 00:14:03.117793 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:14:03.117881 | orchestrator |
2025-09-17 00:14:03.117895 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-09-17 00:14:03.210222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-09-17 00:14:03.210289 | orchestrator |
2025-09-17 00:14:03.210302 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-09-17 00:14:03.847179 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:03.848096 | orchestrator |
2025-09-17 00:14:03.848134 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-09-17 00:14:03.913457 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-09-17 00:14:03.913596 | orchestrator |
2025-09-17 00:14:03.913612 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-09-17 00:14:05.319375 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-17 00:14:05.319479 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-17 00:14:05.319494 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:05.319560 | orchestrator |
2025-09-17 00:14:05.319574 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-09-17 00:14:05.958722 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:05.958816 | orchestrator |
2025-09-17 00:14:05.958828 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-09-17 00:14:06.021044 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:14:06.021143 | orchestrator |
2025-09-17 00:14:06.021160 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-09-17 00:14:06.121101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-09-17 00:14:06.121164 | orchestrator |
2025-09-17 00:14:06.121178 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-09-17 00:14:06.649840 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:06.649933 | orchestrator |
2025-09-17 00:14:06.649947 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-09-17 00:14:07.071911 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:07.072004 | orchestrator |
2025-09-17 00:14:07.072017 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-09-17 00:14:08.323754 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-09-17 00:14:08.323865 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-09-17 00:14:08.323880 | orchestrator |
2025-09-17 00:14:08.323893 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-09-17 00:14:08.956918 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:08.957636 | orchestrator |
2025-09-17 00:14:08.957670 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-09-17 00:14:09.375932 | orchestrator | ok: [testbed-manager]
2025-09-17 00:14:09.376030 | orchestrator |
2025-09-17 00:14:09.376045 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-09-17 00:14:09.736212 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:09.736309 | orchestrator |
2025-09-17 00:14:09.736323 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-09-17 00:14:09.788421 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:14:09.788461 | orchestrator |
2025-09-17 00:14:09.788473 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-09-17 00:14:09.860278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-09-17 00:14:09.860350 | orchestrator |
2025-09-17 00:14:09.860364 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-09-17 00:14:09.902623 | orchestrator | ok: [testbed-manager]
2025-09-17 00:14:09.902663 | orchestrator |
2025-09-17 00:14:09.902675 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-09-17 00:14:11.945776 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-09-17 00:14:11.945880 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-09-17 00:14:11.945895 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-09-17 00:14:11.945907 | orchestrator |
2025-09-17 00:14:11.945920 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-09-17 00:14:12.696209 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:12.696308 | orchestrator |
2025-09-17 00:14:12.696327 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-09-17 00:14:13.445700 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:13.445806 | orchestrator |
2025-09-17 00:14:13.445823 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-09-17 00:14:14.176938 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:14.177704 | orchestrator |
2025-09-17 00:14:14.177734 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-09-17 00:14:14.258680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-09-17 00:14:14.258756 | orchestrator |
2025-09-17 00:14:14.258770 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-09-17 00:14:14.308941 | orchestrator | ok: [testbed-manager]
2025-09-17 00:14:14.309009 | orchestrator |
2025-09-17 00:14:14.309023 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-09-17 00:14:15.038610 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-09-17 00:14:15.038732 | orchestrator |
2025-09-17 00:14:15.038749 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-09-17 00:14:15.118363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-09-17 00:14:15.118455 | orchestrator |
2025-09-17 00:14:15.118469 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-09-17 00:14:15.873159 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:15.873256 | orchestrator |
2025-09-17 00:14:15.873273 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-09-17 00:14:16.464958 | orchestrator | ok: [testbed-manager]
2025-09-17 00:14:16.465063 | orchestrator |
2025-09-17 00:14:16.465077 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-09-17 00:14:16.521011 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:14:16.521053 | orchestrator |
2025-09-17 00:14:16.521067 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-09-17 00:14:16.575891 | orchestrator | ok: [testbed-manager]
2025-09-17 00:14:16.575958 | orchestrator |
2025-09-17 00:14:16.575974 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-09-17 00:14:17.451047 | orchestrator | changed: [testbed-manager]
2025-09-17 00:14:17.451145 | orchestrator |
2025-09-17 00:14:17.451159 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-09-17 00:15:23.460400 | orchestrator | changed: [testbed-manager]
2025-09-17 00:15:23.460528 | orchestrator |
2025-09-17 00:15:23.460547 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-09-17 00:15:24.394091 | orchestrator | ok: [testbed-manager]
2025-09-17 00:15:24.394194 | orchestrator |
2025-09-17 00:15:24.394210 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-09-17 00:15:24.450302 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:15:24.450359 | orchestrator |
2025-09-17 00:15:24.450378 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-09-17 00:15:26.845920 | orchestrator | changed: [testbed-manager]
2025-09-17 00:15:26.846073 | orchestrator |
2025-09-17 00:15:26.846091 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-09-17 00:15:26.900798 | orchestrator | ok: [testbed-manager]
2025-09-17 00:15:26.900878 | orchestrator |
2025-09-17 00:15:26.900892 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-17 00:15:26.900905 | orchestrator |
2025-09-17 00:15:26.900916 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-09-17 00:15:26.966109 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:15:26.966177 | orchestrator |
2025-09-17 00:15:26.966190 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-09-17 00:16:27.004071 | orchestrator | Pausing for 60 seconds
2025-09-17 00:16:27.004213 | orchestrator | changed: [testbed-manager]
2025-09-17 00:16:27.004228 | orchestrator |
2025-09-17 00:16:27.004241 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-09-17 00:16:30.985143 | orchestrator | changed: [testbed-manager]
2025-09-17 00:16:30.985272 | orchestrator |
2025-09-17 00:16:30.985288 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-09-17 00:17:12.702309 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-17 00:17:12.702433 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-17 00:17:12.702450 | orchestrator | changed: [testbed-manager]
2025-09-17 00:17:12.702491 | orchestrator |
2025-09-17 00:17:12.702504 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-17 00:17:22.068766 | orchestrator | changed: [testbed-manager]
2025-09-17 00:17:22.068885 | orchestrator |
2025-09-17 00:17:22.068905 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-17 00:17:22.146868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-17 00:17:22.146973 | orchestrator |
2025-09-17 00:17:22.146988 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-17 00:17:22.147000 | orchestrator |
2025-09-17 00:17:22.147012 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-17 00:17:22.211310 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:17:22.211382 | orchestrator |
2025-09-17 00:17:22.211398 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:17:22.211412 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-17 00:17:22.211423 | orchestrator |
2025-09-17 00:17:22.306620 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-17 00:17:22.306756 | orchestrator | + deactivate
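The FAILED - RETRYING lines above show Ansible's retries/until loop waiting out a slow-starting service; the testbed exposes the same idea to shell via OSISM_APPLY_RETRY. A generic sketch of such a retry wrapper (the `retry` helper here is hypothetical, not part of the repository):

```shell
# Hypothetical retry helper mirroring Ansible's retries/until behaviour:
# run a command up to N times, sleeping between failed attempts.
retry() {
    retries="$1"; shift
    n=1
    while true; do
        # Succeed as soon as the command succeeds
        "$@" && return 0
        if [ "$n" -ge "$retries" ]; then
            echo "giving up after $retries attempts: $*" >&2
            return 1
        fi
        n=$((n + 1))
        sleep 1
    done
}
```

Usage would look like `retry 50 check_manager_health`, which fails only after the whole attempt budget is exhausted, just as the handler above reports a decreasing "retries left" count.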
2025-09-17 00:17:22.306771 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-17 00:17:22.306784 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-17 00:17:22.306795 | orchestrator | + export PATH
2025-09-17 00:17:22.306807 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-17 00:17:22.306818 | orchestrator | + '[' -n '' ']'
2025-09-17 00:17:22.306830 | orchestrator | + hash -r
2025-09-17 00:17:22.306861 | orchestrator | + '[' -n '' ']'
2025-09-17 00:17:22.306873 | orchestrator | + unset VIRTUAL_ENV
2025-09-17 00:17:22.306884 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-17 00:17:22.306895 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-17 00:17:22.306905 | orchestrator | + unset -f deactivate
2025-09-17 00:17:22.306917 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-17 00:17:22.316089 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-17 00:17:22.316139 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-17 00:17:22.316153 | orchestrator | + local max_attempts=60
2025-09-17 00:17:22.316166 | orchestrator | + local name=ceph-ansible
2025-09-17 00:17:22.316177 | orchestrator | + local attempt_num=1
2025-09-17 00:17:22.317097 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-17 00:17:22.359032 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-17 00:17:22.359083 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-17 00:17:22.359096 | orchestrator | + local max_attempts=60
2025-09-17 00:17:22.359109 | orchestrator | + local name=kolla-ansible
2025-09-17 00:17:22.359120 | orchestrator | + local attempt_num=1
2025-09-17 00:17:22.359959 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-17 00:17:22.402179 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-17 00:17:22.402231 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-17 00:17:22.402242 | orchestrator | + local max_attempts=60
2025-09-17 00:17:22.402253 | orchestrator | + local name=osism-ansible
2025-09-17 00:17:22.402263 | orchestrator | + local attempt_num=1
2025-09-17 00:17:22.403414 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-17 00:17:22.438888 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-17 00:17:22.438937 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-17 00:17:22.438952 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-17 00:17:23.160729 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-17 00:17:23.385215 | orchestrator | NAME                             IMAGE                                                      COMMAND                  SERVICE                CREATED              STATUS                        PORTS
2025-09-17 00:17:23.385315 | orchestrator | ceph-ansible                     registry.osism.tech/osism/ceph-ansible:reef                "/entrypoint.sh osis…"   ceph-ansible           About a minute ago   Up About a minute (healthy)
2025-09-17 00:17:23.385330 | orchestrator | kolla-ansible                    registry.osism.tech/osism/kolla-ansible:2024.2             "/entrypoint.sh osis…"   kolla-ansible          About a minute ago   Up About a minute (healthy)
2025-09-17 00:17:23.385377 | orchestrator | manager-api-1                    registry.osism.tech/osism/osism:latest                     "/sbin/tini -- osism…"   api                    About a minute ago   Up About a minute (healthy)   192.168.16.5:8000->8000/tcp
2025-09-17 00:17:23.385391 | orchestrator | manager-ara-server-1             registry.osism.tech/osism/ara-server:1.7.3                 "sh -c '/wait && /ru…"   ara-server             About a minute ago   Up About a minute (healthy)   8000/tcp
2025-09-17 00:17:23.385411 | orchestrator | manager-beat-1                   registry.osism.tech/osism/osism:latest                     "/sbin/tini -- osism…"   beat                   About a minute ago   Up About a minute (healthy)
2025-09-17 00:17:23.385423 | orchestrator | manager-flower-1                 registry.osism.tech/osism/osism:latest                     "/sbin/tini -- osism…"   flower                 About a minute ago   Up About a minute (healthy)
2025-09-17 00:17:23.385434 | orchestrator | manager-inventory_reconciler-1   registry.osism.tech/osism/inventory-reconciler:latest      "/sbin/tini -- /entr…"   inventory_reconciler   About a minute ago   Up 52 seconds (healthy)
2025-09-17 00:17:23.385445 | orchestrator | manager-listener-1               registry.osism.tech/osism/osism:latest                     "/sbin/tini -- osism…"   listener               About a minute ago   Up About a minute (healthy)
2025-09-17 00:17:23.385456 | orchestrator | manager-mariadb-1                registry.osism.tech/dockerhub/library/mariadb:11.8.3       "docker-entrypoint.s…"   mariadb                About a minute ago   Up About a minute (healthy)   3306/tcp
2025-09-17 00:17:23.385467 | orchestrator | manager-openstack-1              registry.osism.tech/osism/osism:latest                     "/sbin/tini -- osism…"   openstack              About a minute ago   Up About a minute (healthy)
2025-09-17 00:17:23.385477 | orchestrator | manager-redis-1                  registry.osism.tech/dockerhub/library/redis:7.4.5-alpine   "docker-entrypoint.s…"   redis                  About a minute ago   Up About a minute (healthy)   6379/tcp
2025-09-17 00:17:23.385488 | orchestrator | osism-ansible                    registry.osism.tech/osism/osism-ansible:latest             "/entrypoint.sh osis…"   osism-ansible          About a minute ago   Up About a minute (healthy)
2025-09-17 00:17:23.385499 | orchestrator | osism-frontend                   registry.osism.tech/osism/osism-frontend:latest            "docker-entrypoint.s…"   frontend               About a minute ago   Up About a minute             192.168.16.5:3000->3000/tcp
2025-09-17 00:17:23.385510 | orchestrator | osism-kubernetes                 registry.osism.tech/osism/osism-kubernetes:latest          "/entrypoint.sh osis…"   osism-kubernetes       About a minute ago   Up About a minute (healthy)
2025-09-17 00:17:23.385520 | orchestrator | osismclient                      registry.osism.tech/osism/osism:latest                     "/sbin/tini -- sleep…"   osismclient            About a minute ago   Up About a minute (healthy)
2025-09-17 00:17:23.393300 | orchestrator | ++ semver latest 7.0.0
2025-09-17 00:17:23.453443 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-17 00:17:23.453522 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-17 00:17:23.453539 | orchestrator | + sed -i
s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-17 00:17:23.458189 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-17 00:17:35.631165 | orchestrator | 2025-09-17 00:17:35 | INFO  | Task 93557336-84e4-4e07-b5a7-eeedb12b09f7 (resolvconf) was prepared for execution. 2025-09-17 00:17:35.631287 | orchestrator | 2025-09-17 00:17:35 | INFO  | It takes a moment until task 93557336-84e4-4e07-b5a7-eeedb12b09f7 (resolvconf) has been started and output is visible here. 2025-09-17 00:17:49.393314 | orchestrator | 2025-09-17 00:17:49.393431 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-17 00:17:49.393448 | orchestrator | 2025-09-17 00:17:49.393460 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-17 00:17:49.393498 | orchestrator | Wednesday 17 September 2025 00:17:39 +0000 (0:00:00.147) 0:00:00.147 *** 2025-09-17 00:17:49.393511 | orchestrator | ok: [testbed-manager] 2025-09-17 00:17:49.393523 | orchestrator | 2025-09-17 00:17:49.393535 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-17 00:17:49.393546 | orchestrator | Wednesday 17 September 2025 00:17:44 +0000 (0:00:04.654) 0:00:04.801 *** 2025-09-17 00:17:49.393557 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:17:49.393568 | orchestrator | 2025-09-17 00:17:49.393579 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-17 00:17:49.393590 | orchestrator | Wednesday 17 September 2025 00:17:44 +0000 (0:00:00.071) 0:00:04.872 *** 2025-09-17 00:17:49.393600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-17 00:17:49.393612 | orchestrator | 2025-09-17 00:17:49.393623 | orchestrator | TASK 
[osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-17 00:17:49.393634 | orchestrator | Wednesday 17 September 2025 00:17:44 +0000 (0:00:00.081) 0:00:04.954 *** 2025-09-17 00:17:49.393645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-17 00:17:49.393655 | orchestrator | 2025-09-17 00:17:49.393724 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-17 00:17:49.393737 | orchestrator | Wednesday 17 September 2025 00:17:44 +0000 (0:00:00.074) 0:00:05.029 *** 2025-09-17 00:17:49.393747 | orchestrator | ok: [testbed-manager] 2025-09-17 00:17:49.393758 | orchestrator | 2025-09-17 00:17:49.393768 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-17 00:17:49.393779 | orchestrator | Wednesday 17 September 2025 00:17:45 +0000 (0:00:00.910) 0:00:05.939 *** 2025-09-17 00:17:49.393789 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:17:49.393800 | orchestrator | 2025-09-17 00:17:49.393811 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-17 00:17:49.393821 | orchestrator | Wednesday 17 September 2025 00:17:45 +0000 (0:00:00.051) 0:00:05.991 *** 2025-09-17 00:17:49.393832 | orchestrator | ok: [testbed-manager] 2025-09-17 00:17:49.393844 | orchestrator | 2025-09-17 00:17:49.393856 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-17 00:17:49.393868 | orchestrator | Wednesday 17 September 2025 00:17:45 +0000 (0:00:00.434) 0:00:06.425 *** 2025-09-17 00:17:49.393881 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:17:49.393892 | orchestrator | 2025-09-17 00:17:49.393904 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 
2025-09-17 00:17:49.393917 | orchestrator | Wednesday 17 September 2025 00:17:45 +0000 (0:00:00.070) 0:00:06.495 *** 2025-09-17 00:17:49.393930 | orchestrator | changed: [testbed-manager] 2025-09-17 00:17:49.393942 | orchestrator | 2025-09-17 00:17:49.393955 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-17 00:17:49.393966 | orchestrator | Wednesday 17 September 2025 00:17:46 +0000 (0:00:00.456) 0:00:06.952 *** 2025-09-17 00:17:49.393979 | orchestrator | changed: [testbed-manager] 2025-09-17 00:17:49.393991 | orchestrator | 2025-09-17 00:17:49.394003 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-17 00:17:49.394073 | orchestrator | Wednesday 17 September 2025 00:17:47 +0000 (0:00:00.940) 0:00:07.892 *** 2025-09-17 00:17:49.394089 | orchestrator | ok: [testbed-manager] 2025-09-17 00:17:49.394102 | orchestrator | 2025-09-17 00:17:49.394114 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-17 00:17:49.394126 | orchestrator | Wednesday 17 September 2025 00:17:48 +0000 (0:00:00.839) 0:00:08.732 *** 2025-09-17 00:17:49.394149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-17 00:17:49.394171 | orchestrator | 2025-09-17 00:17:49.394183 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-17 00:17:49.394195 | orchestrator | Wednesday 17 September 2025 00:17:48 +0000 (0:00:00.087) 0:00:08.820 *** 2025-09-17 00:17:49.394207 | orchestrator | changed: [testbed-manager] 2025-09-17 00:17:49.394218 | orchestrator | 2025-09-17 00:17:49.394228 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:17:49.394240 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 
failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 00:17:49.394250 | orchestrator | 2025-09-17 00:17:49.394261 | orchestrator | 2025-09-17 00:17:49.394272 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:17:49.394282 | orchestrator | Wednesday 17 September 2025 00:17:49 +0000 (0:00:01.076) 0:00:09.897 *** 2025-09-17 00:17:49.394293 | orchestrator | =============================================================================== 2025-09-17 00:17:49.394303 | orchestrator | Gathering Facts --------------------------------------------------------- 4.65s 2025-09-17 00:17:49.394314 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.08s 2025-09-17 00:17:49.394324 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.94s 2025-09-17 00:17:49.394334 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.91s 2025-09-17 00:17:49.394345 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.84s 2025-09-17 00:17:49.394356 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.46s 2025-09-17 00:17:49.394384 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.43s 2025-09-17 00:17:49.394395 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-09-17 00:17:49.394406 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-09-17 00:17:49.394416 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-09-17 00:17:49.394427 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-09-17 00:17:49.394437 | orchestrator | osism.commons.resolvconf : Archive existing file 
/etc/resolv.conf ------- 0.07s 2025-09-17 00:17:49.394447 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-09-17 00:17:49.657824 | orchestrator | + osism apply sshconfig 2025-09-17 00:18:01.588851 | orchestrator | 2025-09-17 00:18:01 | INFO  | Task e84b5ef0-6e25-4f6d-8ae6-a573de8b35e3 (sshconfig) was prepared for execution. 2025-09-17 00:18:01.588972 | orchestrator | 2025-09-17 00:18:01 | INFO  | It takes a moment until task e84b5ef0-6e25-4f6d-8ae6-a573de8b35e3 (sshconfig) has been started and output is visible here. 2025-09-17 00:18:12.889130 | orchestrator | 2025-09-17 00:18:12.889249 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-17 00:18:12.889267 | orchestrator | 2025-09-17 00:18:12.889280 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-17 00:18:12.889291 | orchestrator | Wednesday 17 September 2025 00:18:05 +0000 (0:00:00.160) 0:00:00.160 *** 2025-09-17 00:18:12.889303 | orchestrator | ok: [testbed-manager] 2025-09-17 00:18:12.889315 | orchestrator | 2025-09-17 00:18:12.889326 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-17 00:18:12.889337 | orchestrator | Wednesday 17 September 2025 00:18:06 +0000 (0:00:00.560) 0:00:00.721 *** 2025-09-17 00:18:12.889347 | orchestrator | changed: [testbed-manager] 2025-09-17 00:18:12.889358 | orchestrator | 2025-09-17 00:18:12.889369 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-17 00:18:12.889382 | orchestrator | Wednesday 17 September 2025 00:18:06 +0000 (0:00:00.489) 0:00:01.210 *** 2025-09-17 00:18:12.889393 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-17 00:18:12.889404 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-17 00:18:12.889439 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-1) 2025-09-17 00:18:12.889451 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-17 00:18:12.889461 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-17 00:18:12.889490 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-17 00:18:12.889501 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-17 00:18:12.889511 | orchestrator | 2025-09-17 00:18:12.889522 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-17 00:18:12.889533 | orchestrator | Wednesday 17 September 2025 00:18:12 +0000 (0:00:05.552) 0:00:06.762 *** 2025-09-17 00:18:12.889543 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:18:12.889553 | orchestrator | 2025-09-17 00:18:12.889564 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-17 00:18:12.889574 | orchestrator | Wednesday 17 September 2025 00:18:12 +0000 (0:00:00.065) 0:00:06.827 *** 2025-09-17 00:18:12.889585 | orchestrator | changed: [testbed-manager] 2025-09-17 00:18:12.889595 | orchestrator | 2025-09-17 00:18:12.889605 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:18:12.889618 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:18:12.889629 | orchestrator | 2025-09-17 00:18:12.889640 | orchestrator | 2025-09-17 00:18:12.889651 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:18:12.889661 | orchestrator | Wednesday 17 September 2025 00:18:12 +0000 (0:00:00.554) 0:00:07.381 *** 2025-09-17 00:18:12.889734 | orchestrator | =============================================================================== 2025-09-17 00:18:12.889750 | orchestrator | osism.commons.sshconfig : Ensure config for each host 
exist ------------- 5.55s 2025-09-17 00:18:12.889763 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2025-09-17 00:18:12.889774 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s 2025-09-17 00:18:12.889787 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-09-17 00:18:12.889799 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-09-17 00:18:13.155459 | orchestrator | + osism apply known-hosts 2025-09-17 00:18:25.037388 | orchestrator | 2025-09-17 00:18:25 | INFO  | Task 9f144cb1-73c6-46cf-9df6-6d59d6bde163 (known-hosts) was prepared for execution. 2025-09-17 00:18:25.037506 | orchestrator | 2025-09-17 00:18:25 | INFO  | It takes a moment until task 9f144cb1-73c6-46cf-9df6-6d59d6bde163 (known-hosts) has been started and output is visible here. 2025-09-17 00:18:42.266802 | orchestrator | 2025-09-17 00:18:42.266919 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-17 00:18:42.266936 | orchestrator | 2025-09-17 00:18:42.266948 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-17 00:18:42.266961 | orchestrator | Wednesday 17 September 2025 00:18:28 +0000 (0:00:00.174) 0:00:00.174 *** 2025-09-17 00:18:42.266973 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-17 00:18:42.266985 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-17 00:18:42.266995 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-17 00:18:42.267006 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-17 00:18:42.267017 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-17 00:18:42.267028 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-17 00:18:42.267039 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-2) 2025-09-17 00:18:42.267049 | orchestrator | 2025-09-17 00:18:42.267060 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-17 00:18:42.267072 | orchestrator | Wednesday 17 September 2025 00:18:35 +0000 (0:00:06.904) 0:00:07.078 *** 2025-09-17 00:18:42.267106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-17 00:18:42.267121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-17 00:18:42.267131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-17 00:18:42.267142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-17 00:18:42.267153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-17 00:18:42.267174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-17 00:18:42.267186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-17 00:18:42.267197 | orchestrator | 2025-09-17 00:18:42.267208 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:42.267218 | orchestrator | Wednesday 17 September 2025 00:18:35 +0000 (0:00:00.163) 0:00:07.242 *** 2025-09-17 00:18:42.267229 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIzeypVkhZIAQtgnvnI54OywKgroXQHj+NrUizH8bha0) 2025-09-17 00:18:42.267245 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoyUp3RAqxATnq8PyERHG5ZIVzpK5ZZAteS/fwBCoWEJlL3q+64qZG1nwlJLKK2Sfc9KZ3lqJz/P9r9Y0CKiDOIAOoOzPYqt41gG+44iUb9TgnIb4SR/d2sd7ZRnoU3XitRipfqhU0b2t128lWggDbZYTcSztGhMjA6GlASaVaeChRaGtofWVYsV6e0U2wBxZFl6Vggry8AqNZU+ESvBkuv6LhgQfPc7n8qjs26R5hT5drLxSx9TwA1vx22/nxEqWFGWvrMEfmmbU3pNYf3Ss3ytXoqYI4LwdIbXLruG3qxaTbNAZM+X3mhZED+vMhHVa8LBbDHxSipY3XYWDX8ga/jbqUdUwEq0dYUVD4igX1nNZ3VX7UBeUShF59eJ0NXqPCoq693ZxVPTi1DrRiSdsXr4q1VmSacz70j6qmApNjhBloROOntFP+w22D0XWR4ZfDUfPoH7j4bGu5/bi3mLmXmyV/VRHbnPP36IkPePYMkUzKdM2AD76/7tUtocDGHd0=) 2025-09-17 00:18:42.267259 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPS0vdu9doU90hy1lpcdHBi5Yr6ULLGJ/xg52jLoFd/48OrJr+d1DxvZq0t4WTTZ7ENfcPe43HgWoTvLGF+0lOs=) 2025-09-17 00:18:42.267274 | orchestrator | 2025-09-17 00:18:42.267286 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:42.267298 | orchestrator | Wednesday 17 September 2025 00:18:37 +0000 (0:00:01.133) 0:00:08.375 *** 2025-09-17 00:18:42.267310 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNK3ywMeP433HcfBtyT9duXdondZzVBrllmtYtR4NyW6m012NqmOIr68HuWwUH9kA+tT4/aEs7a7IzRevT/3Ciw=) 2025-09-17 00:18:42.267323 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIPipla4jggpDPYAU+6r4JNtw5lAr3plQIM8mUHrAcTBn) 2025-09-17 00:18:42.267362 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCh581CFf6F+pt4AR93BFEDi9+h7KpjvsQxmHzmZtJiXCAnYZq4i+ZDipU/+O40zbC1rIL6KrZAVkKZYWj3Wq3yuZxPA0DeMi+HeuUF3z1I819degmkQKp+mmWxAdAOt9DqQOcU5L94hXcjUEX87RFsfmp7NE8Pvhc/SlRqv0gRJktS7tk07lfG1c9sThopqthznzexbM5tDsDnUUC9v9e09aBzNB/IpuQl57LBu49NTQIwvwg2waQff9s48z1jmSw8OjZIBqhg6kI9i8PaWmBaaqIog/3byuw4DhSsVQ0rhhtY4oNTJirdGln7FU/I6wCc7xGOogDgjzpH+AL4QSNHFS6Eyid6cPn0KgRd4FyDUoeZ1tHEHdktGuwCulXmRBMEk1MK43IIZ3gsOU0hcp3li2uI9SEUmXykthT8+FUATpI6LQrjAC1cOLm3g9+geXLQw+I06OkaZMr6K2wbtVTiRq+pargvaD/DFqLMOzld6t4NAvw4p/Wc5qgkeVsKFgU=) 2025-09-17 00:18:42.267384 | orchestrator | 2025-09-17 00:18:42.267397 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:42.267409 | orchestrator | Wednesday 17 September 2025 00:18:38 +0000 (0:00:01.030) 0:00:09.406 *** 2025-09-17 00:18:42.267421 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD2njF7URqmAx2NCemcepq+TKKNn5lB6VLU8GtOdCNKCT2exdH01Ln0WHmQsHY64fT2oxGExBcFwgBV8v6gRQX4J2C7n4ITHyqpXO+uUHbJm0fGHDzzCyhNVPB/Q1yZSDWXru39WGwLje8JJ3gz/ngLCWNvRBVq+TUWsydpulnwn1T0ruNYxZFGib0ef2Up2JF7/NnGceF0hPXB2ue9ONAvM9SYQQTd0FLZ+udOBnIa33FAZ9f4N9VbQRWN6ZhcmqIDrJYhsGvet0vbpCFT0cOoLIy62khys3DMMTXybaBshg8VM2vv7Y7VAHE4NmgZTzqTgpMM80w1lZ1DBWXHDAzuvWEfI6uDywcVHMRBuGzrOkk1LJYnYtVv0hs+lmVUOB+7n4cDX6UUtLflCL7217pwIaFYp1fqrxlNrlGXTwei5kmjIxHwBFexhI4zra+wvIwVnN2UGwjXHLXveq3dVR0GIkaNNQ1c8F8BngBUljTOI3VLAC9exahGCxgjJnmZtQs=) 2025-09-17 00:18:42.267435 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBJB/meqtcWqYq1NtCvbyMoTMm//+albutbelILohI0k6VuE3LDLp68YXSxbbtBlqbs8PlvdccAvC70AfDbHuPg=) 2025-09-17 00:18:42.267448 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJRUKY2l8dvrlOwZEi+qSin20pOcLNu9IMXN1+31zC97) 2025-09-17 00:18:42.267460 | orchestrator | 2025-09-17 00:18:42.267472 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:42.267484 | orchestrator | Wednesday 17 September 2025 00:18:39 +0000 (0:00:01.054) 0:00:10.460 *** 2025-09-17 00:18:42.267562 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDECZJjQWMtQ7YWi+Cga4+zVPIuQjVF/YqvWuPRNY45RU6NngmUcGexZaYRNFXlgmENganGWHvISBshVBM4x6SP7wFlNr/NhRYFzmc6cSNaxGuzMWNjA5jjohOKCz8qX4lu2cdgkZ7VmntPUryZI0lhWgijG5fni70eNJ8EXpYk0sMCG2j3X4KS6YWTbkqB+Olx4boZ7/4sMsesq98ZlDG0Qt3mMqERvGMPcthMIr8SE8civXs4NS6gAqqk1YKuYYU0QuPMu6BgSgX2hOkDnuXNeFf5EsgdSSNIw3LhadF39eyBuZU8v0gctk7Pnu3T8zr1sZTLQ4qz0gCLrg7dpm4diqle7YrdbO0Yu3AvINtxRxc0sCNoHxlFqFEqrvgQYQFoqpTHRrFXS+6mTYrAIk1p9orCAEA05gtW18QUDwR6Qfn9Jb5pRTGHB/8HW7R4N394sTGGusSRZA6/Qe+usPdmvwCFkHpwMiY87ly/vTUJEWbAjI/no00lm8Oo9O0HqG0=) 2025-09-17 00:18:42.267576 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL6d0uzTKUMeo/g23IyORUiINj0wjZ9kY5e/4/JRuunT) 2025-09-17 00:18:42.267589 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHBk7CKJTbbP0Kp872Zg3vWtYrf5dkGQ7TGtjTMbGPgtzPzYhmMAiExSdE5TN+vLFRKsXD0BaezRdtAsr/xZP2o=) 2025-09-17 00:18:42.267601 | orchestrator | 2025-09-17 00:18:42.267613 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:42.267626 | orchestrator | Wednesday 17 September 2025 00:18:40 +0000 (0:00:01.007) 0:00:11.468 *** 2025-09-17 00:18:42.267639 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKblqW3y/jroEJ5zUouyyxQ8wYkcT/SGkLdf2zicCZzo) 2025-09-17 00:18:42.267650 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCY0ojnANi07oBzvPE+WGju0xG0fl+SYxSCBBL/FuUu0WLqCOq8O7SKIUF2wITNfCwJQQBmbiDJxvM3le2uB35EX1yIi6Rj7PYo5I6DhFqFqYXDrFlgI1+hNScVdbYx62kC1Xgmgf7CwHNaZ9cnoIvB4sItan1pIxeJJNnYn2HtCuUPDPWXYOpNdYAQjITQbDv7gV7H27+OuX4MLepoZ+U+1lJLWX5K06mnE8O8VEMfZ8HGTpNEUUNSYzS5gpijL+Cq/uJeoLB68Wf1j4sV90G++6AsEMiP2cMp+qqb7IsYZ5yvxWDJpVeG/71yeAJr0upVmzut5VMHoxjMOwSL1YBDu9963KADg4ymRP/QoOaqr/N9HgSCmQbJ/399C9CEFF/N4lUh3LZSE1NcoyNU6CW0a/68g2AUvEzbimowjmQmhcANC4onnrOtEfhNVPedhXkTTnAHTHp8NgS6awJgJvxf4+EZOxJUfoyf0SbS63ORR653uNLpBYNM3H8rbP5UVik=) 2025-09-17 00:18:42.267661 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLQbZRAwV2lHBE0iHoNHrLWc3yJ4xyldWVViEH6pJXUp7OqVjraTBqrxTrbdQJZXjF4MrrSUZfSalCKQyYa0jt4=) 2025-09-17 00:18:42.267703 | orchestrator | 2025-09-17 00:18:42.267714 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:42.267725 | orchestrator | Wednesday 17 September 2025 00:18:41 +0000 (0:00:01.043) 0:00:12.512 *** 2025-09-17 00:18:42.267743 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGWtFvVhCTL3h5eayI6Cm3O/DfUoaowdBuMUho8sckVm5Orl+Peap/AFP+OICjLS52PRKRGO8e/YFjkAFs1swdA=) 2025-09-17 00:18:53.092885 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCifuvvQ/BsN657nZDOkGuZ1B4zBvKaEFF1bX70cpP4C/8l1bPpG6JWTVGfJ/bpJwqHjbVxG3pP+dX7jDVRRiH+x7vdkhJGZMqrcuPOvwcGaEuZUW+gYX7jfBsm5WA7DCWpWCqWT20A6AzVq/eSJNBl+aAw/8iSAxCMpbsy+qw9QWKSL5uhVXY94GglWj+ut1cJAO+nIeOZgxtFx1+bfzgko5I8qeEC5os36nBB035U3471fr3iLU34FyeGYFmAZ6weMov+JTgicVHiQq+jnh/h17oqCInR8U+XDKjsQHm+itdohQvVxhYHcbc66fYILiqVe54y1Z51NNF6IdEtecxhbw4tFns82rzaDdVw6QFEnJd0/z+VBnWnwCXlSFLW44iowySe3liCyU0L3cbROaOgAzw0ytef4Y7Svgi11/ad4oxS+OVTZYWl+1muu3U7b2AmiAKH4J9bAdHUU4+/vbG2XVC7HbVkLY7ERvC0I9+8ZddWe59dagFK9asNtSL+mME=) 2025-09-17 00:18:53.093005 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFD2hNNfnkpC9wUbRhp6vcBLU/mfEZ7tFiX7nhLdkOg8) 2025-09-17 00:18:53.093023 | orchestrator | 2025-09-17 00:18:53.093036 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:53.093048 | orchestrator | Wednesday 17 September 2025 00:18:42 +0000 (0:00:01.039) 0:00:13.551 *** 2025-09-17 00:18:53.093060 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG5eDJug2q5jcmtavnsn2hokNcgMep9WWt4ZPLLmQdUu) 2025-09-17 00:18:53.093071 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOsoRtFEsmrfUF5ZIFdfW/5OYTwHuTWKUkXF+AQHFu3R1XRIcJjneT9tkOIbmUn8UHq5/3DfoSe/Vm6cPOiwTRnuMnW5LqrYaHBNUio9yITHyM5mePMSNWAdAA5ukoys01VBbApkg+dsVf+xDdcUpfrtHkfPjBluuudFZtTG7FmrdaRHg9dffW38NNy61jmGz5MAbVTWiF1T8vJcuTbe/Wnm+xQeMqjdS5R60Z1a33CAp+Gl6uO/iMXfdmjg9b3ao+KMJ2EtiJih2aHd00QL/E8DNMCQlvcyUIdvbBeRk6t5cDx+b+kXUoqxdPVkmNYi+eHqhBGTAD0zCp29BbMKJtNAL62lmChqp3boynHyZzoqryQ/wZVVnYBZ9KapCIEpW7Rf+ip/lZIhEPILqxt43fatg0hf5K5kHcPPp8qf3rUUN7jtHywDuqEKHYTZk/4xo3u/8ujcb2d1OyNFfMUS17Z/bzAbPwsJA6tqAcc10wAbNpJA5VYLlTlOjyFckvZMs=) 2025-09-17 00:18:53.093083 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEp57DIE6xIjLo+LB4WSsbROeMN9q9qFgkLOjBxd2/yExkkp671ZjRCYv43BDN0PT1WhtvOzIvhrvDXj55Q21Qs=) 2025-09-17 00:18:53.093096 | orchestrator | 2025-09-17 00:18:53.093107 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-17 00:18:53.093119 | orchestrator | Wednesday 17 September 2025 00:18:43 +0000 (0:00:01.062) 0:00:14.614 *** 2025-09-17 00:18:53.093130 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-17 00:18:53.093141 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-17 00:18:53.093152 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-17 00:18:53.093163 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-17 00:18:53.093173 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-17 00:18:53.093184 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-17 00:18:53.093195 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-17 00:18:53.093205 | orchestrator | 2025-09-17 00:18:53.093216 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-17 00:18:53.093246 | orchestrator | Wednesday 17 September 2025 00:18:48 +0000 (0:00:05.215) 0:00:19.829 *** 2025-09-17 00:18:53.093259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-17 00:18:53.093272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-17 00:18:53.093307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-17 00:18:53.093319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-17 00:18:53.093330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-17 00:18:53.093340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-17 00:18:53.093351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-17 00:18:53.093362 | orchestrator | 2025-09-17 00:18:53.093389 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:53.093401 | orchestrator | Wednesday 17 September 2025 00:18:48 +0000 (0:00:00.195) 0:00:20.025 *** 2025-09-17 00:18:53.093412 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIzeypVkhZIAQtgnvnI54OywKgroXQHj+NrUizH8bha0) 2025-09-17 00:18:53.093428 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCoyUp3RAqxATnq8PyERHG5ZIVzpK5ZZAteS/fwBCoWEJlL3q+64qZG1nwlJLKK2Sfc9KZ3lqJz/P9r9Y0CKiDOIAOoOzPYqt41gG+44iUb9TgnIb4SR/d2sd7ZRnoU3XitRipfqhU0b2t128lWggDbZYTcSztGhMjA6GlASaVaeChRaGtofWVYsV6e0U2wBxZFl6Vggry8AqNZU+ESvBkuv6LhgQfPc7n8qjs26R5hT5drLxSx9TwA1vx22/nxEqWFGWvrMEfmmbU3pNYf3Ss3ytXoqYI4LwdIbXLruG3qxaTbNAZM+X3mhZED+vMhHVa8LBbDHxSipY3XYWDX8ga/jbqUdUwEq0dYUVD4igX1nNZ3VX7UBeUShF59eJ0NXqPCoq693ZxVPTi1DrRiSdsXr4q1VmSacz70j6qmApNjhBloROOntFP+w22D0XWR4ZfDUfPoH7j4bGu5/bi3mLmXmyV/VRHbnPP36IkPePYMkUzKdM2AD76/7tUtocDGHd0=) 2025-09-17 00:18:53.093442 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPS0vdu9doU90hy1lpcdHBi5Yr6ULLGJ/xg52jLoFd/48OrJr+d1DxvZq0t4WTTZ7ENfcPe43HgWoTvLGF+0lOs=) 2025-09-17 00:18:53.093454 | orchestrator | 2025-09-17 00:18:53.093467 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:53.093480 | orchestrator | Wednesday 17 September 2025 00:18:49 +0000 (0:00:01.073) 0:00:21.098 *** 2025-09-17 00:18:53.093493 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCh581CFf6F+pt4AR93BFEDi9+h7KpjvsQxmHzmZtJiXCAnYZq4i+ZDipU/+O40zbC1rIL6KrZAVkKZYWj3Wq3yuZxPA0DeMi+HeuUF3z1I819degmkQKp+mmWxAdAOt9DqQOcU5L94hXcjUEX87RFsfmp7NE8Pvhc/SlRqv0gRJktS7tk07lfG1c9sThopqthznzexbM5tDsDnUUC9v9e09aBzNB/IpuQl57LBu49NTQIwvwg2waQff9s48z1jmSw8OjZIBqhg6kI9i8PaWmBaaqIog/3byuw4DhSsVQ0rhhtY4oNTJirdGln7FU/I6wCc7xGOogDgjzpH+AL4QSNHFS6Eyid6cPn0KgRd4FyDUoeZ1tHEHdktGuwCulXmRBMEk1MK43IIZ3gsOU0hcp3li2uI9SEUmXykthT8+FUATpI6LQrjAC1cOLm3g9+geXLQw+I06OkaZMr6K2wbtVTiRq+pargvaD/DFqLMOzld6t4NAvw4p/Wc5qgkeVsKFgU=) 2025-09-17 00:18:53.093506 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNK3ywMeP433HcfBtyT9duXdondZzVBrllmtYtR4NyW6m012NqmOIr68HuWwUH9kA+tT4/aEs7a7IzRevT/3Ciw=) 
2025-09-17 00:18:53.093519 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPipla4jggpDPYAU+6r4JNtw5lAr3plQIM8mUHrAcTBn) 2025-09-17 00:18:53.093531 | orchestrator | 2025-09-17 00:18:53.093544 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:53.093556 | orchestrator | Wednesday 17 September 2025 00:18:50 +0000 (0:00:01.069) 0:00:22.167 *** 2025-09-17 00:18:53.093577 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD2njF7URqmAx2NCemcepq+TKKNn5lB6VLU8GtOdCNKCT2exdH01Ln0WHmQsHY64fT2oxGExBcFwgBV8v6gRQX4J2C7n4ITHyqpXO+uUHbJm0fGHDzzCyhNVPB/Q1yZSDWXru39WGwLje8JJ3gz/ngLCWNvRBVq+TUWsydpulnwn1T0ruNYxZFGib0ef2Up2JF7/NnGceF0hPXB2ue9ONAvM9SYQQTd0FLZ+udOBnIa33FAZ9f4N9VbQRWN6ZhcmqIDrJYhsGvet0vbpCFT0cOoLIy62khys3DMMTXybaBshg8VM2vv7Y7VAHE4NmgZTzqTgpMM80w1lZ1DBWXHDAzuvWEfI6uDywcVHMRBuGzrOkk1LJYnYtVv0hs+lmVUOB+7n4cDX6UUtLflCL7217pwIaFYp1fqrxlNrlGXTwei5kmjIxHwBFexhI4zra+wvIwVnN2UGwjXHLXveq3dVR0GIkaNNQ1c8F8BngBUljTOI3VLAC9exahGCxgjJnmZtQs=) 2025-09-17 00:18:53.093591 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBJB/meqtcWqYq1NtCvbyMoTMm//+albutbelILohI0k6VuE3LDLp68YXSxbbtBlqbs8PlvdccAvC70AfDbHuPg=) 2025-09-17 00:18:53.093603 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJRUKY2l8dvrlOwZEi+qSin20pOcLNu9IMXN1+31zC97) 2025-09-17 00:18:53.093615 | orchestrator | 2025-09-17 00:18:53.093628 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:53.093641 | orchestrator | Wednesday 17 September 2025 00:18:51 +0000 (0:00:01.062) 0:00:23.230 *** 2025-09-17 00:18:53.093696 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDECZJjQWMtQ7YWi+Cga4+zVPIuQjVF/YqvWuPRNY45RU6NngmUcGexZaYRNFXlgmENganGWHvISBshVBM4x6SP7wFlNr/NhRYFzmc6cSNaxGuzMWNjA5jjohOKCz8qX4lu2cdgkZ7VmntPUryZI0lhWgijG5fni70eNJ8EXpYk0sMCG2j3X4KS6YWTbkqB+Olx4boZ7/4sMsesq98ZlDG0Qt3mMqERvGMPcthMIr8SE8civXs4NS6gAqqk1YKuYYU0QuPMu6BgSgX2hOkDnuXNeFf5EsgdSSNIw3LhadF39eyBuZU8v0gctk7Pnu3T8zr1sZTLQ4qz0gCLrg7dpm4diqle7YrdbO0Yu3AvINtxRxc0sCNoHxlFqFEqrvgQYQFoqpTHRrFXS+6mTYrAIk1p9orCAEA05gtW18QUDwR6Qfn9Jb5pRTGHB/8HW7R4N394sTGGusSRZA6/Qe+usPdmvwCFkHpwMiY87ly/vTUJEWbAjI/no00lm8Oo9O0HqG0=) 2025-09-17 00:18:57.398862 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHBk7CKJTbbP0Kp872Zg3vWtYrf5dkGQ7TGtjTMbGPgtzPzYhmMAiExSdE5TN+vLFRKsXD0BaezRdtAsr/xZP2o=) 2025-09-17 00:18:57.398969 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL6d0uzTKUMeo/g23IyORUiINj0wjZ9kY5e/4/JRuunT) 2025-09-17 00:18:57.398986 | orchestrator | 2025-09-17 00:18:57.398999 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:57.399011 | orchestrator | Wednesday 17 September 2025 00:18:53 +0000 (0:00:01.143) 0:00:24.373 *** 2025-09-17 00:18:57.399021 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKblqW3y/jroEJ5zUouyyxQ8wYkcT/SGkLdf2zicCZzo) 2025-09-17 00:18:57.399034 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCY0ojnANi07oBzvPE+WGju0xG0fl+SYxSCBBL/FuUu0WLqCOq8O7SKIUF2wITNfCwJQQBmbiDJxvM3le2uB35EX1yIi6Rj7PYo5I6DhFqFqYXDrFlgI1+hNScVdbYx62kC1Xgmgf7CwHNaZ9cnoIvB4sItan1pIxeJJNnYn2HtCuUPDPWXYOpNdYAQjITQbDv7gV7H27+OuX4MLepoZ+U+1lJLWX5K06mnE8O8VEMfZ8HGTpNEUUNSYzS5gpijL+Cq/uJeoLB68Wf1j4sV90G++6AsEMiP2cMp+qqb7IsYZ5yvxWDJpVeG/71yeAJr0upVmzut5VMHoxjMOwSL1YBDu9963KADg4ymRP/QoOaqr/N9HgSCmQbJ/399C9CEFF/N4lUh3LZSE1NcoyNU6CW0a/68g2AUvEzbimowjmQmhcANC4onnrOtEfhNVPedhXkTTnAHTHp8NgS6awJgJvxf4+EZOxJUfoyf0SbS63ORR653uNLpBYNM3H8rbP5UVik=) 2025-09-17 00:18:57.399048 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLQbZRAwV2lHBE0iHoNHrLWc3yJ4xyldWVViEH6pJXUp7OqVjraTBqrxTrbdQJZXjF4MrrSUZfSalCKQyYa0jt4=) 2025-09-17 00:18:57.399059 | orchestrator | 2025-09-17 00:18:57.399070 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:57.399080 | orchestrator | Wednesday 17 September 2025 00:18:54 +0000 (0:00:01.084) 0:00:25.458 *** 2025-09-17 00:18:57.399092 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGWtFvVhCTL3h5eayI6Cm3O/DfUoaowdBuMUho8sckVm5Orl+Peap/AFP+OICjLS52PRKRGO8e/YFjkAFs1swdA=) 2025-09-17 00:18:57.399127 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCifuvvQ/BsN657nZDOkGuZ1B4zBvKaEFF1bX70cpP4C/8l1bPpG6JWTVGfJ/bpJwqHjbVxG3pP+dX7jDVRRiH+x7vdkhJGZMqrcuPOvwcGaEuZUW+gYX7jfBsm5WA7DCWpWCqWT20A6AzVq/eSJNBl+aAw/8iSAxCMpbsy+qw9QWKSL5uhVXY94GglWj+ut1cJAO+nIeOZgxtFx1+bfzgko5I8qeEC5os36nBB035U3471fr3iLU34FyeGYFmAZ6weMov+JTgicVHiQq+jnh/h17oqCInR8U+XDKjsQHm+itdohQvVxhYHcbc66fYILiqVe54y1Z51NNF6IdEtecxhbw4tFns82rzaDdVw6QFEnJd0/z+VBnWnwCXlSFLW44iowySe3liCyU0L3cbROaOgAzw0ytef4Y7Svgi11/ad4oxS+OVTZYWl+1muu3U7b2AmiAKH4J9bAdHUU4+/vbG2XVC7HbVkLY7ERvC0I9+8ZddWe59dagFK9asNtSL+mME=) 
2025-09-17 00:18:57.399140 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFD2hNNfnkpC9wUbRhp6vcBLU/mfEZ7tFiX7nhLdkOg8) 2025-09-17 00:18:57.399151 | orchestrator | 2025-09-17 00:18:57.399162 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 00:18:57.399172 | orchestrator | Wednesday 17 September 2025 00:18:55 +0000 (0:00:01.041) 0:00:26.500 *** 2025-09-17 00:18:57.399183 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEp57DIE6xIjLo+LB4WSsbROeMN9q9qFgkLOjBxd2/yExkkp671ZjRCYv43BDN0PT1WhtvOzIvhrvDXj55Q21Qs=) 2025-09-17 00:18:57.399194 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOsoRtFEsmrfUF5ZIFdfW/5OYTwHuTWKUkXF+AQHFu3R1XRIcJjneT9tkOIbmUn8UHq5/3DfoSe/Vm6cPOiwTRnuMnW5LqrYaHBNUio9yITHyM5mePMSNWAdAA5ukoys01VBbApkg+dsVf+xDdcUpfrtHkfPjBluuudFZtTG7FmrdaRHg9dffW38NNy61jmGz5MAbVTWiF1T8vJcuTbe/Wnm+xQeMqjdS5R60Z1a33CAp+Gl6uO/iMXfdmjg9b3ao+KMJ2EtiJih2aHd00QL/E8DNMCQlvcyUIdvbBeRk6t5cDx+b+kXUoqxdPVkmNYi+eHqhBGTAD0zCp29BbMKJtNAL62lmChqp3boynHyZzoqryQ/wZVVnYBZ9KapCIEpW7Rf+ip/lZIhEPILqxt43fatg0hf5K5kHcPPp8qf3rUUN7jtHywDuqEKHYTZk/4xo3u/8ujcb2d1OyNFfMUS17Z/bzAbPwsJA6tqAcc10wAbNpJA5VYLlTlOjyFckvZMs=) 2025-09-17 00:18:57.399206 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG5eDJug2q5jcmtavnsn2hokNcgMep9WWt4ZPLLmQdUu) 2025-09-17 00:18:57.399216 | orchestrator | 2025-09-17 00:18:57.399227 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-17 00:18:57.399237 | orchestrator | Wednesday 17 September 2025 00:18:56 +0000 (0:00:01.026) 0:00:27.527 *** 2025-09-17 00:18:57.399249 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-17 00:18:57.399260 | orchestrator | skipping: [testbed-manager] 
=> (item=testbed-node-3)  2025-09-17 00:18:57.399287 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-17 00:18:57.399299 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-17 00:18:57.399309 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-17 00:18:57.399320 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-17 00:18:57.399330 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-17 00:18:57.399342 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:18:57.399353 | orchestrator | 2025-09-17 00:18:57.399364 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-17 00:18:57.399374 | orchestrator | Wednesday 17 September 2025 00:18:56 +0000 (0:00:00.162) 0:00:27.689 *** 2025-09-17 00:18:57.399384 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:18:57.399397 | orchestrator | 2025-09-17 00:18:57.399410 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-17 00:18:57.399421 | orchestrator | Wednesday 17 September 2025 00:18:56 +0000 (0:00:00.069) 0:00:27.759 *** 2025-09-17 00:18:57.399434 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:18:57.399446 | orchestrator | 2025-09-17 00:18:57.399458 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-17 00:18:57.399471 | orchestrator | Wednesday 17 September 2025 00:18:56 +0000 (0:00:00.054) 0:00:27.814 *** 2025-09-17 00:18:57.399491 | orchestrator | changed: [testbed-manager] 2025-09-17 00:18:57.399503 | orchestrator | 2025-09-17 00:18:57.399516 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:18:57.399528 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 00:18:57.399541 | 
orchestrator | 2025-09-17 00:18:57.399553 | orchestrator | 2025-09-17 00:18:57.399566 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:18:57.399578 | orchestrator | Wednesday 17 September 2025 00:18:57 +0000 (0:00:00.516) 0:00:28.331 *** 2025-09-17 00:18:57.399590 | orchestrator | =============================================================================== 2025-09-17 00:18:57.399602 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.90s 2025-09-17 00:18:57.399614 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.22s 2025-09-17 00:18:57.399642 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-09-17 00:18:57.399655 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-09-17 00:18:57.399691 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-17 00:18:57.399703 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-17 00:18:57.399715 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-17 00:18:57.399728 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-17 00:18:57.399740 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-17 00:18:57.399752 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-17 00:18:57.399762 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-17 00:18:57.399772 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-17 00:18:57.399783 | orchestrator | osism.commons.known_hosts : Write 
scanned known_hosts entries ----------- 1.04s 2025-09-17 00:18:57.399793 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-17 00:18:57.399804 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-17 00:18:57.399814 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-09-17 00:18:57.399825 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.52s 2025-09-17 00:18:57.399836 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2025-09-17 00:18:57.399847 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-09-17 00:18:57.399857 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-09-17 00:18:57.664745 | orchestrator | + osism apply squid 2025-09-17 00:19:09.577462 | orchestrator | 2025-09-17 00:19:09 | INFO  | Task 62abedbf-fd99-447b-b5e3-573f5d880b43 (squid) was prepared for execution. 2025-09-17 00:19:09.578208 | orchestrator | 2025-09-17 00:19:09 | INFO  | It takes a moment until task 62abedbf-fd99-447b-b5e3-573f5d880b43 (squid) has been started and output is visible here. 
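The known_hosts role above amounts to running ssh-keyscan against each node and collecting the results into one known_hosts file with fixed permissions. A minimal standalone sketch of that pattern (the host list and output path here are illustrative placeholders, not taken from the job):

```shell
# Scan SSH host keys for a set of nodes and collect them into one known_hosts file.
# The host list and the output path are placeholders for illustration.
out=/tmp/demo_known_hosts
: > "$out"
for host in 192.168.16.10 192.168.16.11 192.168.16.12; do
  # -t limits the scan to the key types seen in the log output above;
  # -T bounds the per-host timeout so unreachable hosts do not stall the loop.
  ssh-keyscan -T 2 -t rsa,ecdsa,ed25519 "$host" 2>/dev/null >> "$out" || true
done
chmod 0644 "$out"
```

The trailing chmod mirrors the role's final "Set file permissions" task.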
2025-09-17 00:21:03.948228 | orchestrator |
2025-09-17 00:21:03.948349 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-09-17 00:21:03.948366 | orchestrator |
2025-09-17 00:21:03.948378 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-09-17 00:21:03.948389 | orchestrator | Wednesday 17 September 2025 00:19:13 +0000 (0:00:00.160) 0:00:00.160 ***
2025-09-17 00:21:03.948419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-09-17 00:21:03.948432 | orchestrator |
2025-09-17 00:21:03.948442 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-09-17 00:21:03.948477 | orchestrator | Wednesday 17 September 2025 00:19:13 +0000 (0:00:00.084) 0:00:00.244 ***
2025-09-17 00:21:03.948488 | orchestrator | ok: [testbed-manager]
2025-09-17 00:21:03.948500 | orchestrator |
2025-09-17 00:21:03.948511 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-09-17 00:21:03.948521 | orchestrator | Wednesday 17 September 2025 00:19:14 +0000 (0:00:01.360) 0:00:01.605 ***
2025-09-17 00:21:03.948532 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-09-17 00:21:03.948542 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-09-17 00:21:03.948553 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-09-17 00:21:03.948563 | orchestrator |
2025-09-17 00:21:03.948574 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-09-17 00:21:03.948584 | orchestrator | Wednesday 17 September 2025 00:19:15 +0000 (0:00:01.092) 0:00:02.697 ***
2025-09-17 00:21:03.948595 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-09-17 00:21:03.948606 | orchestrator |
2025-09-17 00:21:03.948616 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-09-17 00:21:03.948627 | orchestrator | Wednesday 17 September 2025 00:19:16 +0000 (0:00:00.999) 0:00:03.696 ***
2025-09-17 00:21:03.948637 | orchestrator | ok: [testbed-manager]
2025-09-17 00:21:03.948647 | orchestrator |
2025-09-17 00:21:03.948704 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-09-17 00:21:03.948716 | orchestrator | Wednesday 17 September 2025 00:19:17 +0000 (0:00:00.356) 0:00:04.053 ***
2025-09-17 00:21:03.948726 | orchestrator | changed: [testbed-manager]
2025-09-17 00:21:03.948737 | orchestrator |
2025-09-17 00:21:03.948748 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-09-17 00:21:03.948758 | orchestrator | Wednesday 17 September 2025 00:19:18 +0000 (0:00:00.884) 0:00:04.937 ***
2025-09-17 00:21:03.948772 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-09-17 00:21:03.948785 | orchestrator | ok: [testbed-manager]
2025-09-17 00:21:03.948797 | orchestrator |
2025-09-17 00:21:03.948809 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-09-17 00:21:03.948821 | orchestrator | Wednesday 17 September 2025 00:19:50 +0000 (0:00:32.404) 0:00:37.341 ***
2025-09-17 00:21:03.948833 | orchestrator | changed: [testbed-manager]
2025-09-17 00:21:03.948845 | orchestrator |
2025-09-17 00:21:03.948858 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-09-17 00:21:03.948870 | orchestrator | Wednesday 17 September 2025 00:20:02 +0000 (0:00:12.311) 0:00:49.653 ***
2025-09-17 00:21:03.948883 | orchestrator | Pausing for 60 seconds
2025-09-17 00:21:03.948896 | orchestrator | changed: [testbed-manager]
2025-09-17 00:21:03.948908 | orchestrator |
2025-09-17 00:21:03.948921 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-09-17 00:21:03.948933 | orchestrator | Wednesday 17 September 2025 00:21:02 +0000 (0:01:00.085) 0:01:49.738 ***
2025-09-17 00:21:03.948945 | orchestrator | ok: [testbed-manager]
2025-09-17 00:21:03.948957 | orchestrator |
2025-09-17 00:21:03.948970 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-09-17 00:21:03.948981 | orchestrator | Wednesday 17 September 2025 00:21:03 +0000 (0:00:00.065) 0:01:49.804 ***
2025-09-17 00:21:03.948994 | orchestrator | changed: [testbed-manager]
2025-09-17 00:21:03.949006 | orchestrator |
2025-09-17 00:21:03.949018 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:21:03.949030 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:21:03.949042 | orchestrator |
2025-09-17 00:21:03.949055 | orchestrator |
2025-09-17 00:21:03.949067 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:21:03.949080 | orchestrator | Wednesday 17 September 2025 00:21:03 +0000 (0:00:00.659) 0:01:50.464 ***
2025-09-17 00:21:03.949110 | orchestrator | ===============================================================================
2025-09-17 00:21:03.949122 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2025-09-17 00:21:03.949135 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.40s
2025-09-17 00:21:03.949145 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.31s
2025-09-17 00:21:03.949156 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.36s
2025-09-17 00:21:03.949166 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.09s
2025-09-17 00:21:03.949177 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.00s
2025-09-17 00:21:03.949187 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s
2025-09-17 00:21:03.949197 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s
2025-09-17 00:21:03.949208 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s
2025-09-17 00:21:03.949218 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2025-09-17 00:21:03.949229 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-09-17 00:21:04.207281 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-17 00:21:04.207768 | orchestrator | ++ semver latest 9.0.0
2025-09-17 00:21:04.265928 | orchestrator | + [[ -1 -lt 0 ]]
2025-09-17 00:21:04.266002 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-17 00:21:04.266382 |
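The retry behaviour visible in the squid play above ("FAILED - RETRYING" followed later by "Wait for an healthy squid service") is the usual poll-until-healthy pattern for a docker-compose service. A rough shell sketch of that pattern (the container name "squid", the 5-second interval, and the 60-second budget are assumptions for illustration, not details taken from the role):

```shell
# Poll a container's health status until it reports "healthy" or a time budget runs out.
# Container name, poll interval, and budget are illustrative assumptions.
wait_healthy() {
  name=$1
  budget=${2:-60}   # seconds to wait in total
  waited=0
  while [ "$waited" -lt "$budget" ]; do
    # Empty status (container missing or no healthcheck) simply keeps polling.
    status=$(docker inspect --format '{{.State.Health.Status}}' "$name" 2>/dev/null)
    if [ "$status" = "healthy" ]; then
      return 0
    fi
    sleep 5
    waited=$((waited + 5))
  done
  return 1
}
```

Something like `wait_healthy squid 60` would mirror the 60-second pause seen in the handler output above.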
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-17 00:21:16.217456 | orchestrator | 2025-09-17 00:21:16 | INFO  | Task 4f05cbd7-5244-40ca-bb31-e3493ef341ae (operator) was prepared for execution. 2025-09-17 00:21:16.217568 | orchestrator | 2025-09-17 00:21:16 | INFO  | It takes a moment until task 4f05cbd7-5244-40ca-bb31-e3493ef341ae (operator) has been started and output is visible here. 2025-09-17 00:21:31.902556 | orchestrator | 2025-09-17 00:21:31.902721 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-17 00:21:31.902740 | orchestrator | 2025-09-17 00:21:31.902752 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-17 00:21:31.902764 | orchestrator | Wednesday 17 September 2025 00:21:20 +0000 (0:00:00.111) 0:00:00.111 *** 2025-09-17 00:21:31.902792 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:21:31.902805 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:21:31.902816 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:21:31.902827 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:21:31.902838 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:21:31.902848 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:21:31.902859 | orchestrator | 2025-09-17 00:21:31.902870 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-17 00:21:31.902881 | orchestrator | Wednesday 17 September 2025 00:21:23 +0000 (0:00:03.320) 0:00:03.431 *** 2025-09-17 00:21:31.902892 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:21:31.902902 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:21:31.902913 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:21:31.902924 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:21:31.902935 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:21:31.902945 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:21:31.902956 | orchestrator | 2025-09-17 
00:21:31.902967 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-17 00:21:31.902977 | orchestrator | 2025-09-17 00:21:31.902988 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-17 00:21:31.902999 | orchestrator | Wednesday 17 September 2025 00:21:24 +0000 (0:00:00.726) 0:00:04.157 *** 2025-09-17 00:21:31.903010 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:21:31.903020 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:21:31.903031 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:21:31.903041 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:21:31.903052 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:21:31.903063 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:21:31.903096 | orchestrator | 2025-09-17 00:21:31.903110 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-17 00:21:31.903123 | orchestrator | Wednesday 17 September 2025 00:21:24 +0000 (0:00:00.175) 0:00:04.333 *** 2025-09-17 00:21:31.903135 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:21:31.903147 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:21:31.903160 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:21:31.903172 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:21:31.903184 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:21:31.903196 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:21:31.903208 | orchestrator | 2025-09-17 00:21:31.903221 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-17 00:21:31.903234 | orchestrator | Wednesday 17 September 2025 00:21:24 +0000 (0:00:00.160) 0:00:04.493 *** 2025-09-17 00:21:31.903246 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:21:31.903259 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:21:31.903271 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:21:31.903283 | 
orchestrator | changed: [testbed-node-1] 2025-09-17 00:21:31.903296 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:21:31.903309 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:21:31.903321 | orchestrator | 2025-09-17 00:21:31.903334 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-17 00:21:31.903346 | orchestrator | Wednesday 17 September 2025 00:21:25 +0000 (0:00:00.634) 0:00:05.128 *** 2025-09-17 00:21:31.903359 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:21:31.903371 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:21:31.903384 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:21:31.903396 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:21:31.903409 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:21:31.903421 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:21:31.903433 | orchestrator | 2025-09-17 00:21:31.903446 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-17 00:21:31.903457 | orchestrator | Wednesday 17 September 2025 00:21:25 +0000 (0:00:00.866) 0:00:05.994 *** 2025-09-17 00:21:31.903468 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-17 00:21:31.903479 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-17 00:21:31.903490 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-17 00:21:31.903501 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-17 00:21:31.903512 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-17 00:21:31.903522 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-17 00:21:31.903533 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-17 00:21:31.903544 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-17 00:21:31.903555 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-17 00:21:31.903565 | orchestrator | changed: 
[testbed-node-0] => (item=sudo) 2025-09-17 00:21:31.903576 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-17 00:21:31.903587 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-17 00:21:31.903598 | orchestrator | 2025-09-17 00:21:31.903609 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-17 00:21:31.903620 | orchestrator | Wednesday 17 September 2025 00:21:27 +0000 (0:00:01.221) 0:00:07.216 *** 2025-09-17 00:21:31.903630 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:21:31.903641 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:21:31.903652 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:21:31.903691 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:21:31.903703 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:21:31.903714 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:21:31.903724 | orchestrator | 2025-09-17 00:21:31.903736 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-17 00:21:31.903748 | orchestrator | Wednesday 17 September 2025 00:21:28 +0000 (0:00:01.312) 0:00:08.528 *** 2025-09-17 00:21:31.903759 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-17 00:21:31.903778 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To
2025-09-17 00:21:31.903790 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-17 00:21:31.903801 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 00:21:31.903827 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 00:21:31.903839 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 00:21:31.903850 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 00:21:31.903861 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 00:21:31.903871 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 00:21:31.903882 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-17 00:21:31.903893 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-17 00:21:31.903904 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-17 00:21:31.903915 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-17 00:21:31.903925 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-17 00:21:31.903936 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-17 00:21:31.903947 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-17 00:21:31.903958 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-17 00:21:31.903969 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-17 00:21:31.903979 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-17 00:21:31.903990 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-17 00:21:31.904001 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-17 00:21:31.904012 | orchestrator |
2025-09-17 00:21:31.904023 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-17 00:21:31.904034 | orchestrator | Wednesday 17 September 2025 00:21:29 +0000 (0:00:01.333) 0:00:09.862 ***
2025-09-17 00:21:31.904045 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:21:31.904056 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:21:31.904066 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:21:31.904077 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:21:31.904088 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:21:31.904098 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:21:31.904109 | orchestrator |
2025-09-17 00:21:31.904120 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-17 00:21:31.904131 | orchestrator | Wednesday 17 September 2025 00:21:29 +0000 (0:00:00.175) 0:00:10.038 ***
2025-09-17 00:21:31.904142 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:21:31.904152 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:21:31.904163 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:21:31.904174 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:21:31.904185 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:21:31.904195 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:21:31.904206 | orchestrator |
2025-09-17 00:21:31.904217 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-17 00:21:31.904228 | orchestrator | Wednesday 17 September 2025 00:21:30 +0000 (0:00:00.572) 0:00:10.610 ***
2025-09-17 00:21:31.904239 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:21:31.904250 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:21:31.904260 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:21:31.904271 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:21:31.904282 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:21:31.904293 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:21:31.904303 | orchestrator |
2025-09-17 00:21:31.904322 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-17 00:21:31.904333 | orchestrator | Wednesday 17 September 2025 00:21:30 +0000 (0:00:00.183) 0:00:10.794 ***
2025-09-17 00:21:31.904344 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 00:21:31.904360 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-17 00:21:31.904371 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-17 00:21:31.904382 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-17 00:21:31.904393 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-17 00:21:31.904404 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:21:31.904415 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:21:31.904425 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:21:31.904436 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:21:31.904447 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:21:31.904458 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-17 00:21:31.904469 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:21:31.904479 | orchestrator |
2025-09-17 00:21:31.904491 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-17 00:21:31.904501 | orchestrator | Wednesday 17 September 2025 00:21:31 +0000 (0:00:00.722) 0:00:11.517 ***
2025-09-17 00:21:31.904512 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:21:31.904523 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:21:31.904534 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:21:31.904544 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:21:31.904555 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:21:31.904566 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:21:31.904577 | orchestrator |
2025-09-17 00:21:31.904588 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-17 00:21:31.904605 | orchestrator | Wednesday 17 September 2025 00:21:31 +0000 (0:00:00.174) 0:00:11.691 ***
2025-09-17 00:21:31.904616 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:21:31.904627 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:21:31.904638 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:21:31.904649 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:21:31.904659 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:21:31.904687 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:21:31.904698 | orchestrator |
2025-09-17 00:21:31.904708 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-17 00:21:31.904720 | orchestrator | Wednesday 17 September 2025 00:21:31 +0000 (0:00:00.160) 0:00:11.852 ***
2025-09-17 00:21:31.904735 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:21:31.904746 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:21:31.904757 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:21:31.904768 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:21:31.904786 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:21:33.120872 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:21:33.120994 | orchestrator |
2025-09-17 00:21:33.121020 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-17 00:21:33.121039 | orchestrator | Wednesday 17 September 2025 00:21:31 +0000 (0:00:00.148) 0:00:12.000 ***
2025-09-17 00:21:33.121066 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:21:33.121083 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:21:33.121102 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:21:33.121120 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:21:33.121136 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:21:33.121146 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:21:33.121156 | orchestrator |
2025-09-17 00:21:33.121167 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-17 00:21:33.121177 | orchestrator | Wednesday 17 September 2025 00:21:32 +0000 (0:00:00.705) 0:00:12.705 ***
2025-09-17 00:21:33.121186 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:21:33.121196 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:21:33.121205 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:21:33.121241 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:21:33.121251 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:21:33.121260 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:21:33.121269 | orchestrator |
2025-09-17 00:21:33.121279 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:21:33.121290 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 00:21:33.121302 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 00:21:33.121312 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 00:21:33.121321 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 00:21:33.121331 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 00:21:33.121340 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 00:21:33.121349 | orchestrator |
2025-09-17 00:21:33.121359 | orchestrator |
2025-09-17 00:21:33.121369 |
orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:21:33.121378 | orchestrator | Wednesday 17 September 2025 00:21:32 +0000 (0:00:00.235) 0:00:12.940 ***
2025-09-17 00:21:33.121388 | orchestrator | ===============================================================================
2025-09-17 00:21:33.121397 | orchestrator | Gathering Facts --------------------------------------------------------- 3.32s
2025-09-17 00:21:33.121412 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.33s
2025-09-17 00:21:33.121429 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.31s
2025-09-17 00:21:33.121445 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s
2025-09-17 00:21:33.121461 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s
2025-09-17 00:21:33.121478 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s
2025-09-17 00:21:33.121495 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s
2025-09-17 00:21:33.121512 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s
2025-09-17 00:21:33.121524 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2025-09-17 00:21:33.121534 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s
2025-09-17 00:21:33.121544 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2025-09-17 00:21:33.121553 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2025-09-17 00:21:33.121562 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2025-09-17 00:21:33.121572 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2025-09-17 00:21:33.121581 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2025-09-17 00:21:33.121590 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2025-09-17 00:21:33.121600 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-09-17 00:21:33.121609 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2025-09-17 00:21:33.381501 | orchestrator | + osism apply --environment custom facts
2025-09-17 00:21:35.247560 | orchestrator | 2025-09-17 00:21:35 | INFO  | Trying to run play facts in environment custom
2025-09-17 00:21:45.340242 | orchestrator | 2025-09-17 00:21:45 | INFO  | Task b8dc8122-aaa3-4f02-ad7d-423bcba334c9 (facts) was prepared for execution.
2025-09-17 00:21:45.340362 | orchestrator | 2025-09-17 00:21:45 | INFO  | It takes a moment until task b8dc8122-aaa3-4f02-ad7d-423bcba334c9 (facts) has been started and output is visible here.
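The `osism apply` INFO lines above announce each queued play with a task UUID, e.g. `Task b8dc8122-… (facts) was prepared for execution.`. A minimal sketch of pulling the UUID and play name out of such a line when post-processing this console log; `parse_task_line` is a hypothetical helper, not part of the osism CLI:

```python
import re

# Matches the announcement format seen in the log above, e.g.:
#   2025-09-17 00:21:45 | INFO  | Task <uuid> (facts) was prepared for execution.
TASK_LINE = re.compile(
    r"Task (?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}) "
    r"\((?P<play>[\w-]+)\) was prepared for execution\."
)


def parse_task_line(line: str):
    """Return (uuid, play) if the line announces a prepared task, else None."""
    m = TASK_LINE.search(line)
    return (m.group("uuid"), m.group("play")) if m else None


line = ("2025-09-17 00:21:45 | INFO  | Task "
        "b8dc8122-aaa3-4f02-ad7d-423bcba334c9 (facts) was prepared for execution.")
print(parse_task_line(line))
# → ('b8dc8122-aaa3-4f02-ad7d-423bcba334c9', 'facts')
```

Searching (rather than anchoring) keeps the helper indifferent to the Zuul timestamp prefix in front of each record.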
2025-09-17 00:22:30.345482 | orchestrator |
2025-09-17 00:22:30.345623 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-17 00:22:30.345641 | orchestrator |
2025-09-17 00:22:30.345653 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-17 00:22:30.345665 | orchestrator | Wednesday 17 September 2025 00:21:49 +0000 (0:00:00.086) 0:00:00.086 ***
2025-09-17 00:22:30.345727 | orchestrator | ok: [testbed-manager]
2025-09-17 00:22:30.345741 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:22:30.345753 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:22:30.345764 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:22:30.345775 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:22:30.345785 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:22:30.345796 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:22:30.345807 | orchestrator |
2025-09-17 00:22:30.345818 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-17 00:22:30.345829 | orchestrator | Wednesday 17 September 2025 00:21:50 +0000 (0:00:01.404) 0:00:01.491 ***
2025-09-17 00:22:30.345839 | orchestrator | ok: [testbed-manager]
2025-09-17 00:22:30.345850 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:22:30.345860 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:22:30.345871 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:22:30.345882 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:22:30.345892 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:22:30.345903 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:22:30.345913 | orchestrator |
2025-09-17 00:22:30.345924 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-17 00:22:30.345935 | orchestrator |
2025-09-17 00:22:30.345945 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-17 00:22:30.345956 | orchestrator | Wednesday 17 September 2025 00:21:51 +0000 (0:00:01.146) 0:00:02.637 ***
2025-09-17 00:22:30.345967 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:30.345978 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:30.345988 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:30.346000 | orchestrator |
2025-09-17 00:22:30.346077 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-17 00:22:30.346093 | orchestrator | Wednesday 17 September 2025 00:21:51 +0000 (0:00:00.096) 0:00:02.733 ***
2025-09-17 00:22:30.346105 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:30.346118 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:30.346130 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:30.346143 | orchestrator |
2025-09-17 00:22:30.346155 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-17 00:22:30.346167 | orchestrator | Wednesday 17 September 2025 00:21:51 +0000 (0:00:00.179) 0:00:02.911 ***
2025-09-17 00:22:30.346179 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:30.346191 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:30.346203 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:30.346215 | orchestrator |
2025-09-17 00:22:30.346228 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-17 00:22:30.346240 | orchestrator | Wednesday 17 September 2025 00:21:52 +0000 (0:00:00.122) 0:00:03.091 ***
2025-09-17 00:22:30.346254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:22:30.346269 | orchestrator |
2025-09-17 00:22:30.346281 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-17 00:22:30.346294 | orchestrator | Wednesday 17 September 2025 00:21:52 +0000 (0:00:00.122) 0:00:03.213 ***
2025-09-17 00:22:30.346334 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:30.346346 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:30.346358 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:30.346370 | orchestrator |
2025-09-17 00:22:30.346381 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-17 00:22:30.346392 | orchestrator | Wednesday 17 September 2025 00:21:52 +0000 (0:00:00.448) 0:00:03.662 ***
2025-09-17 00:22:30.346402 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:22:30.346413 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:22:30.346423 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:22:30.346434 | orchestrator |
2025-09-17 00:22:30.346444 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-17 00:22:30.346455 | orchestrator | Wednesday 17 September 2025 00:21:52 +0000 (0:00:00.122) 0:00:03.785 ***
2025-09-17 00:22:30.346466 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:22:30.346476 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:22:30.346487 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:22:30.346497 | orchestrator |
2025-09-17 00:22:30.346508 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-17 00:22:30.346518 | orchestrator | Wednesday 17 September 2025 00:21:53 +0000 (0:00:01.139) 0:00:04.925 ***
2025-09-17 00:22:30.346529 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:30.346539 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:30.346550 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:30.346560 | orchestrator |
2025-09-17 00:22:30.346571 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-17 00:22:30.346583 | orchestrator | Wednesday 17 September 2025 00:21:54 +0000 (0:00:00.459) 0:00:05.384 ***
2025-09-17 00:22:30.346593 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:22:30.346604 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:22:30.346614 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:22:30.346625 | orchestrator |
2025-09-17 00:22:30.346636 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-17 00:22:30.346646 | orchestrator | Wednesday 17 September 2025 00:21:55 +0000 (0:00:01.070) 0:00:06.455 ***
2025-09-17 00:22:30.346694 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:22:30.346707 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:22:30.346717 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:22:30.346728 | orchestrator |
2025-09-17 00:22:30.346739 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-17 00:22:30.346749 | orchestrator | Wednesday 17 September 2025 00:22:13 +0000 (0:00:17.912) 0:00:24.368 ***
2025-09-17 00:22:30.346760 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:22:30.346771 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:22:30.346786 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:22:30.346797 | orchestrator |
2025-09-17 00:22:30.346808 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-17 00:22:30.346840 | orchestrator | Wednesday 17 September 2025 00:22:13 +0000 (0:00:00.102) 0:00:24.470 ***
2025-09-17 00:22:30.346852 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:22:30.346862 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:22:30.346873 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:22:30.346883 | orchestrator |
2025-09-17 00:22:30.346894 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-17 00:22:30.346905 | orchestrator | Wednesday 17 September 2025 00:22:21 +0000 (0:00:08.101) 0:00:32.572 ***
2025-09-17 00:22:30.346915 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:30.346926 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:30.346936 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:30.346947 | orchestrator |
2025-09-17 00:22:30.346958 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-17 00:22:30.346969 | orchestrator | Wednesday 17 September 2025 00:22:22 +0000 (0:00:00.418) 0:00:32.991 ***
2025-09-17 00:22:30.346979 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-17 00:22:30.347000 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-17 00:22:30.347010 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-17 00:22:30.347021 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-17 00:22:30.347031 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-17 00:22:30.347042 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-17 00:22:30.347052 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-17 00:22:30.347063 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-17 00:22:30.347073 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-17 00:22:30.347084 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-17 00:22:30.347094 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-17 00:22:30.347105 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-17 00:22:30.347115 | orchestrator |
2025-09-17 00:22:30.347126 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-17 00:22:30.347137 | orchestrator | Wednesday 17 September 2025 00:22:25 +0000 (0:00:03.422) 0:00:36.413 ***
2025-09-17 00:22:30.347147 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:30.347158 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:30.347168 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:30.347179 | orchestrator |
2025-09-17 00:22:30.347190 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-17 00:22:30.347200 | orchestrator |
2025-09-17 00:22:30.347211 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-17 00:22:30.347222 | orchestrator | Wednesday 17 September 2025 00:22:26 +0000 (0:00:01.017) 0:00:37.431 ***
2025-09-17 00:22:30.347232 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:22:30.347243 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:22:30.347253 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:22:30.347264 | orchestrator | ok: [testbed-manager]
2025-09-17 00:22:30.347274 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:30.347285 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:30.347295 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:30.347306 | orchestrator |
2025-09-17 00:22:30.347316 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:22:30.347328 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:22:30.347340 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:22:30.347352 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:22:30.347363 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:22:30.347373 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:22:30.347385 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:22:30.347395 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:22:30.347406 | orchestrator |
2025-09-17 00:22:30.347416 | orchestrator |
2025-09-17 00:22:30.347427 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:22:30.347438 | orchestrator | Wednesday 17 September 2025 00:22:30 +0000 (0:00:03.830) 0:00:41.261 ***
2025-09-17 00:22:30.347449 | orchestrator | ===============================================================================
2025-09-17 00:22:30.347467 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.91s
2025-09-17 00:22:30.347478 | orchestrator | Install required packages (Debian) -------------------------------------- 8.10s
2025-09-17 00:22:30.347488 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.83s
2025-09-17 00:22:30.347499 | orchestrator | Copy fact files --------------------------------------------------------- 3.42s
2025-09-17 00:22:30.347515 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2025-09-17 00:22:30.347525 | orchestrator | Copy fact file ---------------------------------------------------------- 1.15s
2025-09-17 00:22:30.347542 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.14s
2025-09-17 00:22:30.570872 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2025-09-17 00:22:30.570976 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.02s
2025-09-17 00:22:30.570988 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-09-17 00:22:30.570999 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2025-09-17 00:22:30.571009 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-09-17 00:22:30.571018 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2025-09-17 00:22:30.571028 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2025-09-17 00:22:30.571038 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2025-09-17 00:22:30.571047 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2025-09-17 00:22:30.571059 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2025-09-17 00:22:30.571068 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-09-17 00:22:30.848156 | orchestrator | + osism apply bootstrap
2025-09-17 00:22:42.887377 | orchestrator | 2025-09-17 00:22:42 | INFO  | Task 4ea2e777-fc12-4e1f-90cf-9045c142da2d (bootstrap) was prepared for execution.
2025-09-17 00:22:42.887517 | orchestrator | 2025-09-17 00:22:42 | INFO  | It takes a moment until task 4ea2e777-fc12-4e1f-90cf-9045c142da2d (bootstrap) has been started and output is visible here.
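The PLAY RECAP blocks above use Ansible's fixed `host : ok=N changed=N unreachable=N failed=N …` layout, which makes it easy to tally results when scraping this log. A minimal sketch of such a parser; `parse_recap` is a hypothetical helper written for illustration, not part of the testbed tooling:

```python
import re

# One PLAY RECAP line: a host name, a colon, then stat=count pairs.
RECAP = re.compile(r"^(?P<host>[\w.-]+)\s*:\s*(?P<counts>(?:\w+=\d+\s*)+)$")


def parse_recap(line: str):
    """Parse one PLAY RECAP line into (host, {stat: count}), or None."""
    m = RECAP.match(line.strip())
    if not m:
        return None
    counts = {k: int(v) for k, v in
              (pair.split("=") for pair in m.group("counts").split())}
    return m.group("host"), counts


line = ("testbed-node-0 : ok=12  changed=8  unreachable=0 "
        "failed=0 skipped=6  rescued=0 ignored=0")
host, stats = parse_recap(line)
print(host, stats["failed"])
# → testbed-node-0 0
```

A CI post-processing step could apply this to every recap line and flag the build whenever any host reports `failed` or `unreachable` greater than zero.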
2025-09-17 00:22:58.418150 | orchestrator |
2025-09-17 00:22:58.418293 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-17 00:22:58.418310 | orchestrator |
2025-09-17 00:22:58.418321 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-17 00:22:58.418333 | orchestrator | Wednesday 17 September 2025 00:22:46 +0000 (0:00:00.121) 0:00:00.121 ***
2025-09-17 00:22:58.418344 | orchestrator | ok: [testbed-manager]
2025-09-17 00:22:58.418357 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:58.418368 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:58.418379 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:58.418389 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:22:58.418400 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:22:58.418410 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:22:58.418420 | orchestrator |
2025-09-17 00:22:58.418431 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-17 00:22:58.418442 | orchestrator |
2025-09-17 00:22:58.418453 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-17 00:22:58.418463 | orchestrator | Wednesday 17 September 2025 00:22:46 +0000 (0:00:00.200) 0:00:00.321 ***
2025-09-17 00:22:58.418474 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:22:58.418484 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:22:58.418495 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:22:58.418506 | orchestrator | ok: [testbed-manager]
2025-09-17 00:22:58.418516 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:58.418527 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:58.418537 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:58.418574 | orchestrator |
2025-09-17 00:22:58.418586 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-17 00:22:58.418596 | orchestrator |
2025-09-17 00:22:58.418607 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-17 00:22:58.418620 | orchestrator | Wednesday 17 September 2025 00:22:50 +0000 (0:00:03.790) 0:00:04.112 ***
2025-09-17 00:22:58.418634 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-17 00:22:58.418647 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-17 00:22:58.418659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-17 00:22:58.418671 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-17 00:22:58.418684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:22:58.418718 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-17 00:22:58.418731 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-17 00:22:58.418742 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-17 00:22:58.418754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:22:58.418766 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-17 00:22:58.418778 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-17 00:22:58.418791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:22:58.418803 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-17 00:22:58.418815 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-17 00:22:58.418827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-17 00:22:58.418839 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-17 00:22:58.418851 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-17 00:22:58.418863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-17 00:22:58.418875 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-17 00:22:58.418888 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:22:58.418899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-17 00:22:58.418911 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:22:58.418923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-17 00:22:58.418936 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-17 00:22:58.418948 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-17 00:22:58.418960 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-17 00:22:58.418972 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-17 00:22:58.418982 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-17 00:22:58.418993 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-17 00:22:58.419003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-17 00:22:58.419014 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-17 00:22:58.419024 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-17 00:22:58.419035 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-17 00:22:58.419045 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-17 00:22:58.419055 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:22:58.419066 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:22:58.419095 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-17 00:22:58.419107 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-17 00:22:58.419117 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-17 00:22:58.419128 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-17 00:22:58.419138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-17 00:22:58.419158 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-17 00:22:58.419169 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-17 00:22:58.419180 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 00:22:58.419191 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-17 00:22:58.419202 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-17 00:22:58.419229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-17 00:22:58.419241 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-17 00:22:58.419252 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-17 00:22:58.419262 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:22:58.419273 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-17 00:22:58.419283 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:22:58.419294 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-17 00:22:58.419304 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-17 00:22:58.419315 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-17 00:22:58.419325 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:22:58.419336 | orchestrator |
2025-09-17 00:22:58.419346 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-17 00:22:58.419357 | orchestrator |
2025-09-17 00:22:58.419367 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-17 00:22:58.419378 | orchestrator | Wednesday 17 September 2025 00:22:51 +0000 (0:00:00.438) 0:00:04.550 ***
2025-09-17 00:22:58.419388 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:58.419399 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:22:58.419409 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:58.419420 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:22:58.419430 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:22:58.419440 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:58.419450 | orchestrator | ok: [testbed-manager]
2025-09-17 00:22:58.419461 | orchestrator |
2025-09-17 00:22:58.419471 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-17 00:22:58.419482 | orchestrator | Wednesday 17 September 2025 00:22:52 +0000 (0:00:01.132) 0:00:05.682 ***
2025-09-17 00:22:58.419492 | orchestrator | ok: [testbed-manager]
2025-09-17 00:22:58.419503 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:22:58.419513 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:22:58.419524 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:22:58.419534 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:22:58.419544 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:22:58.419555 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:22:58.419565 | orchestrator |
2025-09-17 00:22:58.419575 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-17 00:22:58.419586 | orchestrator | Wednesday 17 September 2025 00:22:53 +0000 (0:00:01.291) 0:00:06.974 ***
2025-09-17 00:22:58.419598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:22:58.419612 | orchestrator |
2025-09-17 00:22:58.419622 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-17 00:22:58.419633 | orchestrator | Wednesday 17 September 2025 00:22:53 +0000 (0:00:00.279) 0:00:07.253 ***
2025-09-17 00:22:58.419643 | orchestrator | changed: [testbed-manager]
2025-09-17 00:22:58.419654 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:22:58.419664 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:22:58.419675 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:22:58.419708 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:22:58.419719 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:22:58.419730 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:22:58.419740 | orchestrator |
2025-09-17 00:22:58.419759 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-09-17 00:22:58.419770 | orchestrator | Wednesday 17 September 2025 00:22:56 +0000 (0:00:02.142) 0:00:09.395 ***
2025-09-17 00:22:58.419780 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:22:58.419793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:22:58.419805 | orchestrator |
2025-09-17 00:22:58.419820 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-09-17 00:22:58.419831 | orchestrator | Wednesday 17 September 2025 00:22:56 +0000 (0:00:00.296) 0:00:09.692 ***
2025-09-17 00:22:58.419842 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:22:58.419852 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:22:58.419863 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:22:58.419873 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:22:58.419883 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:22:58.419894 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:22:58.419904 | orchestrator |
2025-09-17 00:22:58.419914 | orchestrator | TASK [osism.commons.proxy : Set system
wide settings in environment file] ****** 2025-09-17 00:22:58.419925 | orchestrator | Wednesday 17 September 2025 00:22:57 +0000 (0:00:00.976) 0:00:10.669 *** 2025-09-17 00:22:58.419935 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:22:58.419946 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:22:58.419956 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:22:58.419966 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:22:58.419977 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:22:58.419987 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:22:58.419997 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:22:58.420008 | orchestrator | 2025-09-17 00:22:58.420018 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-17 00:22:58.420029 | orchestrator | Wednesday 17 September 2025 00:22:57 +0000 (0:00:00.552) 0:00:11.221 *** 2025-09-17 00:22:58.420039 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:22:58.420049 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:22:58.420060 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:22:58.420070 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:22:58.420080 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:22:58.420091 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:22:58.420101 | orchestrator | ok: [testbed-manager] 2025-09-17 00:22:58.420111 | orchestrator | 2025-09-17 00:22:58.420122 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-17 00:22:58.420134 | orchestrator | Wednesday 17 September 2025 00:22:58 +0000 (0:00:00.410) 0:00:11.631 *** 2025-09-17 00:22:58.420144 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:22:58.420155 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:22:58.420172 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:23:10.246254 | orchestrator | skipping: 
[testbed-node-5] 2025-09-17 00:23:10.246374 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:23:10.246388 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:23:10.246398 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:23:10.246408 | orchestrator | 2025-09-17 00:23:10.246419 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-17 00:23:10.246431 | orchestrator | Wednesday 17 September 2025 00:22:58 +0000 (0:00:00.208) 0:00:11.840 *** 2025-09-17 00:23:10.246443 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:23:10.246470 | orchestrator | 2025-09-17 00:23:10.246480 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-17 00:23:10.246491 | orchestrator | Wednesday 17 September 2025 00:22:58 +0000 (0:00:00.277) 0:00:12.117 *** 2025-09-17 00:23:10.246522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:23:10.246533 | orchestrator | 2025-09-17 00:23:10.246542 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-17 00:23:10.246552 | orchestrator | Wednesday 17 September 2025 00:22:59 +0000 (0:00:00.298) 0:00:12.416 *** 2025-09-17 00:23:10.246562 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.246572 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:10.246581 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:10.246591 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:10.246600 | orchestrator | ok: [testbed-node-2] 2025-09-17 
00:23:10.246609 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:10.246618 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:10.246627 | orchestrator | 2025-09-17 00:23:10.246637 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-17 00:23:10.246646 | orchestrator | Wednesday 17 September 2025 00:23:00 +0000 (0:00:01.235) 0:00:13.652 *** 2025-09-17 00:23:10.246656 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:23:10.246665 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:23:10.246674 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:23:10.246684 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:23:10.246717 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:23:10.246727 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:23:10.246736 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:23:10.246745 | orchestrator | 2025-09-17 00:23:10.246755 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-17 00:23:10.246764 | orchestrator | Wednesday 17 September 2025 00:23:00 +0000 (0:00:00.224) 0:00:13.876 *** 2025-09-17 00:23:10.246774 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.246783 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:10.246792 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:10.246802 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:10.246811 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:10.246820 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:10.246830 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:10.246839 | orchestrator | 2025-09-17 00:23:10.246849 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-17 00:23:10.246858 | orchestrator | Wednesday 17 September 2025 00:23:01 +0000 (0:00:00.537) 0:00:14.414 *** 2025-09-17 00:23:10.246867 | orchestrator | skipping: 
[testbed-manager] 2025-09-17 00:23:10.246877 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:23:10.246887 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:23:10.246896 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:23:10.246906 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:23:10.246915 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:23:10.246924 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:23:10.246933 | orchestrator | 2025-09-17 00:23:10.246944 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-17 00:23:10.246955 | orchestrator | Wednesday 17 September 2025 00:23:01 +0000 (0:00:00.318) 0:00:14.732 *** 2025-09-17 00:23:10.246964 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.246973 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:23:10.246983 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:23:10.246992 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:23:10.247001 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:23:10.247011 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:23:10.247020 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:23:10.247029 | orchestrator | 2025-09-17 00:23:10.247039 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-17 00:23:10.247048 | orchestrator | Wednesday 17 September 2025 00:23:01 +0000 (0:00:00.557) 0:00:15.289 *** 2025-09-17 00:23:10.247065 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.247075 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:23:10.247084 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:23:10.247093 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:23:10.247103 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:23:10.247112 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:23:10.247121 | orchestrator | changed: 
[testbed-node-2] 2025-09-17 00:23:10.247131 | orchestrator | 2025-09-17 00:23:10.247140 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-17 00:23:10.247150 | orchestrator | Wednesday 17 September 2025 00:23:02 +0000 (0:00:01.023) 0:00:16.313 *** 2025-09-17 00:23:10.247159 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.247169 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:10.247178 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:10.247187 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:10.247197 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:10.247207 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:10.247216 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:10.247225 | orchestrator | 2025-09-17 00:23:10.247235 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-17 00:23:10.247245 | orchestrator | Wednesday 17 September 2025 00:23:04 +0000 (0:00:01.140) 0:00:17.453 *** 2025-09-17 00:23:10.247271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:23:10.247282 | orchestrator | 2025-09-17 00:23:10.247291 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-17 00:23:10.247301 | orchestrator | Wednesday 17 September 2025 00:23:04 +0000 (0:00:00.382) 0:00:17.836 *** 2025-09-17 00:23:10.247310 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:23:10.247320 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:23:10.247329 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:23:10.247339 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:23:10.247348 | orchestrator | changed: [testbed-node-5] 2025-09-17 
00:23:10.247358 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:23:10.247367 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:23:10.247376 | orchestrator | 2025-09-17 00:23:10.247386 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-17 00:23:10.247395 | orchestrator | Wednesday 17 September 2025 00:23:05 +0000 (0:00:01.366) 0:00:19.202 *** 2025-09-17 00:23:10.247405 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.247414 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:10.247424 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:10.247433 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:10.247442 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:10.247452 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:10.247461 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:10.247471 | orchestrator | 2025-09-17 00:23:10.247480 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-17 00:23:10.247490 | orchestrator | Wednesday 17 September 2025 00:23:06 +0000 (0:00:00.203) 0:00:19.406 *** 2025-09-17 00:23:10.247499 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.247509 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:10.247518 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:10.247527 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:10.247537 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:10.247546 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:10.247555 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:10.247565 | orchestrator | 2025-09-17 00:23:10.247574 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-17 00:23:10.247584 | orchestrator | Wednesday 17 September 2025 00:23:06 +0000 (0:00:00.199) 0:00:19.605 *** 2025-09-17 00:23:10.247594 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.247641 | 
orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:10.247658 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:10.247667 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:10.247677 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:10.247686 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:10.247711 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:10.247721 | orchestrator | 2025-09-17 00:23:10.247731 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-17 00:23:10.247740 | orchestrator | Wednesday 17 September 2025 00:23:06 +0000 (0:00:00.200) 0:00:19.805 *** 2025-09-17 00:23:10.247751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:23:10.247762 | orchestrator | 2025-09-17 00:23:10.247772 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-17 00:23:10.247781 | orchestrator | Wednesday 17 September 2025 00:23:06 +0000 (0:00:00.260) 0:00:20.066 *** 2025-09-17 00:23:10.247791 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.247800 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:10.247809 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:10.247819 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:10.247828 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:10.247838 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:10.247847 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:10.247856 | orchestrator | 2025-09-17 00:23:10.247870 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-17 00:23:10.247880 | orchestrator | Wednesday 17 September 2025 00:23:07 +0000 (0:00:00.508) 0:00:20.574 *** 2025-09-17 00:23:10.247889 | orchestrator | 
skipping: [testbed-manager] 2025-09-17 00:23:10.247899 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:23:10.247908 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:23:10.247917 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:23:10.247927 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:23:10.247936 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:23:10.247946 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:23:10.247955 | orchestrator | 2025-09-17 00:23:10.247964 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-17 00:23:10.247974 | orchestrator | Wednesday 17 September 2025 00:23:07 +0000 (0:00:00.201) 0:00:20.776 *** 2025-09-17 00:23:10.247983 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.247993 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:10.248002 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:10.248011 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:10.248021 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:23:10.248030 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:23:10.248040 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:23:10.248049 | orchestrator | 2025-09-17 00:23:10.248059 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-17 00:23:10.248068 | orchestrator | Wednesday 17 September 2025 00:23:08 +0000 (0:00:01.055) 0:00:21.831 *** 2025-09-17 00:23:10.248078 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.248087 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:10.248097 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:10.248106 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:10.248115 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:10.248125 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:10.248134 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:10.248143 | orchestrator | 
2025-09-17 00:23:10.248153 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-17 00:23:10.248162 | orchestrator | Wednesday 17 September 2025 00:23:09 +0000 (0:00:00.566) 0:00:22.398 *** 2025-09-17 00:23:10.248172 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:10.248181 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:10.248190 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:10.248200 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:10.248222 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:23:49.818954 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:23:49.819078 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:23:49.819095 | orchestrator | 2025-09-17 00:23:49.819108 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-17 00:23:49.819120 | orchestrator | Wednesday 17 September 2025 00:23:10 +0000 (0:00:01.179) 0:00:23.578 *** 2025-09-17 00:23:49.819131 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.819143 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:49.819153 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.819164 | orchestrator | changed: [testbed-manager] 2025-09-17 00:23:49.819175 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:23:49.819186 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:23:49.819197 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:23:49.819207 | orchestrator | 2025-09-17 00:23:49.819218 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-17 00:23:49.819229 | orchestrator | Wednesday 17 September 2025 00:23:27 +0000 (0:00:17.638) 0:00:41.216 *** 2025-09-17 00:23:49.819240 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:49.819251 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.819261 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.819272 | orchestrator 
| ok: [testbed-node-5] 2025-09-17 00:23:49.819283 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:49.819293 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:49.819304 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:49.819314 | orchestrator | 2025-09-17 00:23:49.819325 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-17 00:23:49.819336 | orchestrator | Wednesday 17 September 2025 00:23:28 +0000 (0:00:00.248) 0:00:41.465 *** 2025-09-17 00:23:49.819347 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:49.819358 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.819368 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.819379 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:49.819389 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:49.819400 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:49.819411 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:49.819421 | orchestrator | 2025-09-17 00:23:49.819432 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-17 00:23:49.819443 | orchestrator | Wednesday 17 September 2025 00:23:28 +0000 (0:00:00.236) 0:00:41.701 *** 2025-09-17 00:23:49.819454 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:49.819465 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.819478 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.819491 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:49.819503 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:49.819515 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:49.819528 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:49.819540 | orchestrator | 2025-09-17 00:23:49.819553 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-17 00:23:49.819566 | orchestrator | Wednesday 17 September 2025 00:23:28 +0000 (0:00:00.223) 0:00:41.924 *** 2025-09-17 
00:23:49.819579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:23:49.819592 | orchestrator | 2025-09-17 00:23:49.819603 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-17 00:23:49.819614 | orchestrator | Wednesday 17 September 2025 00:23:28 +0000 (0:00:00.282) 0:00:42.207 *** 2025-09-17 00:23:49.819625 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:49.819635 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.819646 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.819657 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:49.819667 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:49.819678 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:49.819735 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:49.819747 | orchestrator | 2025-09-17 00:23:49.819758 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-17 00:23:49.819769 | orchestrator | Wednesday 17 September 2025 00:23:30 +0000 (0:00:01.495) 0:00:43.702 *** 2025-09-17 00:23:49.819795 | orchestrator | changed: [testbed-manager] 2025-09-17 00:23:49.819806 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:23:49.819817 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:23:49.819827 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:23:49.819838 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:23:49.819849 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:23:49.819859 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:23:49.819870 | orchestrator | 2025-09-17 00:23:49.819880 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-17 00:23:49.819891 | 
orchestrator | Wednesday 17 September 2025 00:23:31 +0000 (0:00:01.055) 0:00:44.757 *** 2025-09-17 00:23:49.819903 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:49.819914 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.819924 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.819935 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:49.819945 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:49.819956 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:49.819967 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:49.819977 | orchestrator | 2025-09-17 00:23:49.819988 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-17 00:23:49.819999 | orchestrator | Wednesday 17 September 2025 00:23:32 +0000 (0:00:00.854) 0:00:45.612 *** 2025-09-17 00:23:49.820010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:23:49.820023 | orchestrator | 2025-09-17 00:23:49.820033 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-17 00:23:49.820045 | orchestrator | Wednesday 17 September 2025 00:23:32 +0000 (0:00:00.326) 0:00:45.939 *** 2025-09-17 00:23:49.820055 | orchestrator | changed: [testbed-manager] 2025-09-17 00:23:49.820066 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:23:49.820076 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:23:49.820087 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:23:49.820098 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:23:49.820108 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:23:49.820118 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:23:49.820129 | orchestrator | 2025-09-17 00:23:49.820155 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-09-17 00:23:49.820167 | orchestrator | Wednesday 17 September 2025 00:23:33 +0000 (0:00:01.047) 0:00:46.987 *** 2025-09-17 00:23:49.820211 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:23:49.820222 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:23:49.820232 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:23:49.820243 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:23:49.820253 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:23:49.820264 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:23:49.820274 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:23:49.820285 | orchestrator | 2025-09-17 00:23:49.820295 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-17 00:23:49.820306 | orchestrator | Wednesday 17 September 2025 00:23:33 +0000 (0:00:00.238) 0:00:47.225 *** 2025-09-17 00:23:49.820317 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:23:49.820327 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:23:49.820338 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:23:49.820348 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:23:49.820359 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:23:49.820369 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:23:49.820380 | orchestrator | changed: [testbed-manager] 2025-09-17 00:23:49.820399 | orchestrator | 2025-09-17 00:23:49.820410 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-17 00:23:49.820421 | orchestrator | Wednesday 17 September 2025 00:23:44 +0000 (0:00:10.987) 0:00:58.213 *** 2025-09-17 00:23:49.820431 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:49.820442 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:49.820452 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.820463 | orchestrator | ok: [testbed-manager] 2025-09-17 
00:23:49.820473 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:49.820484 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:49.820494 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.820505 | orchestrator | 2025-09-17 00:23:49.820515 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-17 00:23:49.820526 | orchestrator | Wednesday 17 September 2025 00:23:45 +0000 (0:00:00.788) 0:00:59.002 *** 2025-09-17 00:23:49.820537 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:49.820547 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.820558 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.820568 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:49.820578 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:49.820589 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:49.820599 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:49.820609 | orchestrator | 2025-09-17 00:23:49.820620 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-17 00:23:49.820630 | orchestrator | Wednesday 17 September 2025 00:23:46 +0000 (0:00:00.911) 0:00:59.913 *** 2025-09-17 00:23:49.820641 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:49.820651 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.820662 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.820672 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:49.820683 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:49.820693 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:49.820735 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:49.820746 | orchestrator | 2025-09-17 00:23:49.820757 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-17 00:23:49.820768 | orchestrator | Wednesday 17 September 2025 00:23:46 +0000 (0:00:00.233) 0:01:00.147 *** 2025-09-17 00:23:49.820779 | 
orchestrator | ok: [testbed-manager] 2025-09-17 00:23:49.820789 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.820799 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.820810 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:49.820820 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:49.820830 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:49.820841 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:49.820851 | orchestrator | 2025-09-17 00:23:49.820862 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-17 00:23:49.820872 | orchestrator | Wednesday 17 September 2025 00:23:46 +0000 (0:00:00.208) 0:01:00.355 *** 2025-09-17 00:23:49.820884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:23:49.820895 | orchestrator | 2025-09-17 00:23:49.820906 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-17 00:23:49.820916 | orchestrator | Wednesday 17 September 2025 00:23:47 +0000 (0:00:00.275) 0:01:00.630 *** 2025-09-17 00:23:49.820927 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.820938 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:49.820948 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:49.820974 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.820985 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:49.821005 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:49.821016 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:49.821026 | orchestrator | 2025-09-17 00:23:49.821037 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-17 00:23:49.821048 | orchestrator | Wednesday 17 September 2025 00:23:48 +0000 
(0:00:01.664) 0:01:02.295 *** 2025-09-17 00:23:49.821066 | orchestrator | changed: [testbed-manager] 2025-09-17 00:23:49.821077 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:23:49.821087 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:23:49.821098 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:23:49.821108 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:23:49.821119 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:23:49.821129 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:23:49.821140 | orchestrator | 2025-09-17 00:23:49.821151 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-17 00:23:49.821161 | orchestrator | Wednesday 17 September 2025 00:23:49 +0000 (0:00:00.563) 0:01:02.859 *** 2025-09-17 00:23:49.821172 | orchestrator | ok: [testbed-manager] 2025-09-17 00:23:49.821183 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:23:49.821193 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:23:49.821204 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:23:49.821214 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:23:49.821225 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:23:49.821235 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:23:49.821246 | orchestrator | 2025-09-17 00:23:49.821264 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-17 00:26:04.927297 | orchestrator | Wednesday 17 September 2025 00:23:49 +0000 (0:00:00.301) 0:01:03.161 *** 2025-09-17 00:26:04.927416 | orchestrator | ok: [testbed-manager] 2025-09-17 00:26:04.927432 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:26:04.927444 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:26:04.927455 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:26:04.927466 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:26:04.927476 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:26:04.927487 | orchestrator | ok: 
[testbed-node-1] 2025-09-17 00:26:04.927497 | orchestrator | 2025-09-17 00:26:04.927510 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-17 00:26:04.927521 | orchestrator | Wednesday 17 September 2025 00:23:50 +0000 (0:00:01.072) 0:01:04.233 *** 2025-09-17 00:26:04.927531 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:26:04.927543 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:26:04.927554 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:26:04.927564 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:26:04.927575 | orchestrator | changed: [testbed-manager] 2025-09-17 00:26:04.927585 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:26:04.927596 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:26:04.927606 | orchestrator | 2025-09-17 00:26:04.927617 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-17 00:26:04.927628 | orchestrator | Wednesday 17 September 2025 00:23:52 +0000 (0:00:01.438) 0:01:05.671 *** 2025-09-17 00:26:04.927639 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:26:04.927649 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:26:04.927660 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:26:04.927670 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:26:04.927681 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:26:04.927710 | orchestrator | ok: [testbed-manager] 2025-09-17 00:26:04.927722 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:26:04.927732 | orchestrator | 2025-09-17 00:26:04.927778 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-17 00:26:04.927789 | orchestrator | Wednesday 17 September 2025 00:23:54 +0000 (0:00:01.989) 0:01:07.661 *** 2025-09-17 00:26:04.927800 | orchestrator | ok: [testbed-manager] 2025-09-17 00:26:04.927811 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:26:04.927822 | orchestrator | 
ok: [testbed-node-1] 2025-09-17 00:26:04.927833 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:26:04.927846 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:26:04.927858 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:26:04.927870 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:26:04.927883 | orchestrator | 2025-09-17 00:26:04.927896 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-17 00:26:04.927930 | orchestrator | Wednesday 17 September 2025 00:24:34 +0000 (0:00:40.102) 0:01:47.763 *** 2025-09-17 00:26:04.927942 | orchestrator | changed: [testbed-manager] 2025-09-17 00:26:04.927954 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:26:04.927967 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:26:04.927979 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:26:04.927992 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:26:04.928004 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:26:04.928017 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:26:04.928030 | orchestrator | 2025-09-17 00:26:04.928042 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-17 00:26:04.928055 | orchestrator | Wednesday 17 September 2025 00:25:50 +0000 (0:01:16.090) 0:03:03.853 *** 2025-09-17 00:26:04.928068 | orchestrator | ok: [testbed-manager] 2025-09-17 00:26:04.928080 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:26:04.928094 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:26:04.928106 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:26:04.928118 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:26:04.928130 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:26:04.928142 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:26:04.928154 | orchestrator | 2025-09-17 00:26:04.928167 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-17 00:26:04.928181 
| orchestrator | Wednesday 17 September 2025 00:25:52 +0000 (0:00:01.682) 0:03:05.536 *** 2025-09-17 00:26:04.928192 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:26:04.928202 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:26:04.928213 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:26:04.928228 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:26:04.928239 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:26:04.928249 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:26:04.928260 | orchestrator | changed: [testbed-manager] 2025-09-17 00:26:04.928270 | orchestrator | 2025-09-17 00:26:04.928281 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-17 00:26:04.928291 | orchestrator | Wednesday 17 September 2025 00:26:03 +0000 (0:00:11.488) 0:03:17.024 *** 2025-09-17 00:26:04.928310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-17 00:26:04.928332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-09-17 00:26:04.928369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-17 00:26:04.928382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-17 00:26:04.928401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-17 00:26:04.928412 | orchestrator | 2025-09-17 00:26:04.928423 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-17 00:26:04.928434 | orchestrator | Wednesday 17 September 2025 00:26:04 +0000 (0:00:00.391) 0:03:17.416 *** 2025-09-17 00:26:04.928445 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-17 00:26:04.928456 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:26:04.928466 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-17 00:26:04.928476 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-17 00:26:04.928487 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:26:04.928497 | orchestrator | skipping: [testbed-node-4] 2025-09-17 
00:26:04.928508 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-17 00:26:04.928518 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:26:04.928529 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-17 00:26:04.928539 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-17 00:26:04.928549 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-17 00:26:04.928560 | orchestrator | 2025-09-17 00:26:04.928570 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-17 00:26:04.928581 | orchestrator | Wednesday 17 September 2025 00:26:04 +0000 (0:00:00.714) 0:03:18.130 *** 2025-09-17 00:26:04.928591 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-17 00:26:04.928603 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-17 00:26:04.928613 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-17 00:26:04.928624 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-17 00:26:04.928634 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-17 00:26:04.928649 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-17 00:26:04.928661 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-17 00:26:04.928671 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-17 00:26:04.928682 | orchestrator | skipping: [testbed-manager] => 
(item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-17 00:26:04.928692 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-17 00:26:04.928702 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-17 00:26:04.928713 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-17 00:26:04.928723 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-17 00:26:04.928734 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-17 00:26:04.928766 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-17 00:26:04.928777 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-17 00:26:04.928787 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-17 00:26:04.928805 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-17 00:26:04.928815 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-17 00:26:04.928826 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:26:04.928836 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-17 00:26:04.928853 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-17 00:26:13.793921 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-17 00:26:13.794097 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  
2025-09-17 00:26:13.794117 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-17 00:26:13.794130 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-17 00:26:13.794173 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-17 00:26:13.794185 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-17 00:26:13.794196 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-17 00:26:13.794207 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-17 00:26:13.794219 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:26:13.794231 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-17 00:26:13.794242 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-17 00:26:13.794253 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-17 00:26:13.794264 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-17 00:26:13.794274 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-17 00:26:13.794285 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-17 00:26:13.794296 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-17 00:26:13.794306 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-17 00:26:13.794317 | orchestrator | skipping: [testbed-node-5] => 
(item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-17 00:26:13.794328 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-17 00:26:13.794338 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-17 00:26:13.794350 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:26:13.794362 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:26:13.794372 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-17 00:26:13.794383 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-17 00:26:13.794393 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-17 00:26:13.794405 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-17 00:26:13.794418 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-17 00:26:13.794431 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-17 00:26:13.794444 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-17 00:26:13.794479 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-17 00:26:13.794492 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-17 00:26:13.794503 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-17 00:26:13.794515 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-17 00:26:13.794528 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-17 00:26:13.794539 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-17 00:26:13.794551 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-17 00:26:13.794562 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-17 00:26:13.794575 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-17 00:26:13.794587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-17 00:26:13.794599 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-17 00:26:13.794610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-17 00:26:13.794623 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-17 00:26:13.794635 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-17 00:26:13.794665 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-17 00:26:13.794679 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-17 00:26:13.794691 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-17 00:26:13.794703 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-17 00:26:13.794715 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-17 00:26:13.794727 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2025-09-17 00:26:13.794760 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-17 00:26:13.794772 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-17 00:26:13.794783 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-17 00:26:13.794794 | orchestrator | 2025-09-17 00:26:13.794806 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-17 00:26:13.794816 | orchestrator | Wednesday 17 September 2025 00:26:10 +0000 (0:00:06.024) 0:03:24.155 *** 2025-09-17 00:26:13.794827 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-17 00:26:13.794838 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-17 00:26:13.794848 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-17 00:26:13.794859 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-17 00:26:13.794869 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-17 00:26:13.794880 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-17 00:26:13.794890 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-17 00:26:13.794901 | orchestrator | 2025-09-17 00:26:13.794911 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-17 00:26:13.794931 | orchestrator | Wednesday 17 September 2025 00:26:11 +0000 (0:00:00.598) 0:03:24.754 *** 2025-09-17 00:26:13.794942 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-17 00:26:13.794952 | orchestrator | skipping: 
[testbed-manager] 2025-09-17 00:26:13.794980 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-17 00:26:13.794992 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-17 00:26:13.795003 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:26:13.795013 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:26:13.795024 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-17 00:26:13.795035 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:26:13.795045 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-17 00:26:13.795056 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-17 00:26:13.795072 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-17 00:26:13.795083 | orchestrator | 2025-09-17 00:26:13.795093 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-17 00:26:13.795104 | orchestrator | Wednesday 17 September 2025 00:26:12 +0000 (0:00:01.495) 0:03:26.249 *** 2025-09-17 00:26:13.795114 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-17 00:26:13.795125 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:26:13.795135 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-17 00:26:13.795145 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-17 00:26:13.795156 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:26:13.795166 | orchestrator | skipping: 
[testbed-node-1] 2025-09-17 00:26:13.795177 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-17 00:26:13.795188 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:26:13.795198 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-17 00:26:13.795209 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-17 00:26:13.795219 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-17 00:26:13.795230 | orchestrator | 2025-09-17 00:26:13.795240 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-17 00:26:13.795251 | orchestrator | Wednesday 17 September 2025 00:26:13 +0000 (0:00:00.589) 0:03:26.839 *** 2025-09-17 00:26:13.795261 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:26:13.795272 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:26:13.795283 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:26:13.795293 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:26:13.795304 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:26:13.795323 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:26:26.662062 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:26:26.662173 | orchestrator | 2025-09-17 00:26:26.662186 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-17 00:26:26.662198 | orchestrator | Wednesday 17 September 2025 00:26:13 +0000 (0:00:00.297) 0:03:27.136 *** 2025-09-17 00:26:26.662206 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:26:26.662216 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:26:26.662225 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:26:26.662233 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:26:26.662262 | 
orchestrator | ok: [testbed-node-1] 2025-09-17 00:26:26.662271 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:26:26.662279 | orchestrator | ok: [testbed-manager] 2025-09-17 00:26:26.662288 | orchestrator | 2025-09-17 00:26:26.662296 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-17 00:26:26.662305 | orchestrator | Wednesday 17 September 2025 00:26:20 +0000 (0:00:06.584) 0:03:33.721 *** 2025-09-17 00:26:26.662313 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-17 00:26:26.662322 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-17 00:26:26.662330 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:26:26.662339 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:26:26.662347 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-17 00:26:26.662355 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-17 00:26:26.662364 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:26:26.662372 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-17 00:26:26.662380 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:26:26.662388 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-17 00:26:26.662397 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:26:26.662409 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:26:26.662418 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-17 00:26:26.662426 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:26:26.662434 | orchestrator | 2025-09-17 00:26:26.662443 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-09-17 00:26:26.662451 | orchestrator | Wednesday 17 September 2025 00:26:20 +0000 (0:00:00.326) 0:03:34.047 *** 2025-09-17 00:26:26.662459 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-09-17 00:26:26.662468 | orchestrator | ok: [testbed-node-3] => 
(item=cron) 2025-09-17 00:26:26.662476 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-17 00:26:26.662485 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-17 00:26:26.662493 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-17 00:26:26.662501 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-17 00:26:26.662510 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-17 00:26:26.662518 | orchestrator | 2025-09-17 00:26:26.662526 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-17 00:26:26.662535 | orchestrator | Wednesday 17 September 2025 00:26:21 +0000 (0:00:01.068) 0:03:35.116 *** 2025-09-17 00:26:26.662545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:26:26.662555 | orchestrator | 2025-09-17 00:26:26.662566 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-17 00:26:26.662575 | orchestrator | Wednesday 17 September 2025 00:26:22 +0000 (0:00:00.549) 0:03:35.665 *** 2025-09-17 00:26:26.662585 | orchestrator | ok: [testbed-manager] 2025-09-17 00:26:26.662595 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:26:26.662605 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:26:26.662614 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:26:26.662624 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:26:26.662633 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:26:26.662643 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:26:26.662652 | orchestrator | 2025-09-17 00:26:26.662674 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-17 00:26:26.662684 | orchestrator | Wednesday 17 September 2025 00:26:23 +0000 (0:00:01.466) 
0:03:37.132 *** 2025-09-17 00:26:26.662694 | orchestrator | ok: [testbed-manager] 2025-09-17 00:26:26.662703 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:26:26.662713 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:26:26.662723 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:26:26.662732 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:26:26.662741 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:26:26.662777 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:26:26.662787 | orchestrator | 2025-09-17 00:26:26.662797 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-17 00:26:26.662807 | orchestrator | Wednesday 17 September 2025 00:26:24 +0000 (0:00:00.650) 0:03:37.783 *** 2025-09-17 00:26:26.662816 | orchestrator | changed: [testbed-manager] 2025-09-17 00:26:26.662826 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:26:26.662835 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:26:26.662844 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:26:26.662854 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:26:26.662863 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:26:26.662873 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:26:26.662882 | orchestrator | 2025-09-17 00:26:26.662892 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-17 00:26:26.662901 | orchestrator | Wednesday 17 September 2025 00:26:25 +0000 (0:00:00.624) 0:03:38.408 *** 2025-09-17 00:26:26.662911 | orchestrator | ok: [testbed-manager] 2025-09-17 00:26:26.662920 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:26:26.662928 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:26:26.662937 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:26:26.662945 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:26:26.662953 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:26:26.662962 | orchestrator | ok: [testbed-node-2] 
2025-09-17 00:26:26.662970 | orchestrator |
2025-09-17 00:26:26.662979 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-17 00:26:26.662987 | orchestrator | Wednesday 17 September 2025 00:26:25 +0000 (0:00:00.614) 0:03:39.022 ***
2025-09-17 00:26:26.663014 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758067442.3736, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:26.663026 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758067470.9696543, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:26.663035 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758067458.0966136, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:26.663044 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758067466.4656663, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:26.663058 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758067470.6033642, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:26.663073 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758067480.2382882, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:26.663082 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758067466.3950427, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:26.663105 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:42.987862 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:42.987986 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:42.988004 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:42.988016 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:42.988049 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:42.988061 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 00:26:42.988073 | orchestrator |
2025-09-17 00:26:42.988087 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-09-17 00:26:42.988099 | orchestrator | Wednesday 17 September 2025 00:26:26 +0000 (0:00:00.980) 0:03:40.002 ***
2025-09-17 00:26:42.988110 | orchestrator | changed: [testbed-manager]
2025-09-17 00:26:42.988121 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:26:42.988132 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:26:42.988142 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:26:42.988153 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:26:42.988163 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:26:42.988173 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:26:42.988184 | orchestrator |
2025-09-17 00:26:42.988195 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-09-17 00:26:42.988205 | orchestrator | Wednesday 17 September 2025 00:26:27 +0000 (0:00:01.141) 0:03:41.143 ***
2025-09-17 00:26:42.988216 | orchestrator | changed: [testbed-manager]
2025-09-17 00:26:42.988226 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:26:42.988237 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:26:42.988247 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:26:42.988273 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:26:42.988285 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:26:42.988296 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:26:42.988306 | orchestrator |
2025-09-17 00:26:42.988316 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-09-17 00:26:42.988327 | orchestrator | Wednesday 17 September 2025 00:26:29 +0000 (0:00:01.224) 0:03:42.368 ***
2025-09-17 00:26:42.988337 | orchestrator | changed: [testbed-manager]
2025-09-17 00:26:42.988365 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:26:42.988379 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:26:42.988391 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:26:42.988402 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:26:42.988414 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:26:42.988426 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:26:42.988438 | orchestrator |
2025-09-17 00:26:42.988450 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-09-17 00:26:42.988462 | orchestrator | Wednesday 17 September 2025 00:26:30 +0000 (0:00:01.203) 0:03:43.571 ***
2025-09-17 00:26:42.988482 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:26:42.988495 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:26:42.988507 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:26:42.988519 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:26:42.988531 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:26:42.988543 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:26:42.988555 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:26:42.988567 | orchestrator |
2025-09-17 00:26:42.988579 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-09-17 00:26:42.988591 | orchestrator | Wednesday 17 September 2025 00:26:30 +0000 (0:00:00.290) 0:03:43.862 ***
2025-09-17 00:26:42.988604 | orchestrator | ok: [testbed-manager]
2025-09-17 00:26:42.988616 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:26:42.988628 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:26:42.988639 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:26:42.988651 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:26:42.988662 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:26:42.988674 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:26:42.988688 | orchestrator |
2025-09-17 00:26:42.988700 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-09-17 00:26:42.988712 | orchestrator | Wednesday 17 September 2025 00:26:31 +0000 (0:00:00.766) 0:03:44.628 ***
2025-09-17 00:26:42.988725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:26:42.988737 | orchestrator |
2025-09-17 00:26:42.988771 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-09-17 00:26:42.988782 | orchestrator | Wednesday 17 September 2025 00:26:31 +0000 (0:00:00.435) 0:03:45.064 ***
2025-09-17 00:26:42.988793 | orchestrator | ok: [testbed-manager]
2025-09-17 00:26:42.988803 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:26:42.988814 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:26:42.988824 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:26:42.988835 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:26:42.988845 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:26:42.988856 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:26:42.988866 | orchestrator |
2025-09-17 00:26:42.988877 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-09-17 00:26:42.988887 | orchestrator | Wednesday 17 September 2025 00:26:39 +0000 (0:00:08.110) 0:03:53.174 ***
2025-09-17 00:26:42.988898 | orchestrator | ok: [testbed-manager]
2025-09-17 00:26:42.988913 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:26:42.988924 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:26:42.988935 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:26:42.988945 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:26:42.988955 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:26:42.988966 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:26:42.988976 | orchestrator |
2025-09-17 00:26:42.988987 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-09-17 00:26:42.988998 | orchestrator | Wednesday 17 September 2025 00:26:41 +0000 (0:00:01.194) 0:03:54.369 ***
2025-09-17 00:26:42.989008 | orchestrator | ok: [testbed-manager]
2025-09-17 00:26:42.989019 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:26:42.989029 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:26:42.989039 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:26:42.989050 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:26:42.989060 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:26:42.989070 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:26:42.989080 | orchestrator |
2025-09-17 00:26:42.989091 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-09-17 00:26:42.989101 | orchestrator | Wednesday 17 September 2025 00:26:42 +0000 (0:00:01.024) 0:03:55.394 ***
2025-09-17 00:26:42.989112 | orchestrator | ok: [testbed-manager]
2025-09-17 00:26:42.989129 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:26:42.989140 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:26:42.989150 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:26:42.989160 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:26:42.989171 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:26:42.989181 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:26:42.989191 | orchestrator |
2025-09-17 00:26:42.989202 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-09-17 00:26:42.989213 | orchestrator | Wednesday 17 September 2025 00:26:42 +0000 (0:00:00.290) 0:03:55.685 ***
2025-09-17 00:26:42.989224 | orchestrator | ok: [testbed-manager]
2025-09-17 00:26:42.989234 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:26:42.989244 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:26:42.989255 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:26:42.989265 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:26:42.989276 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:26:42.989286 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:26:42.989296 | orchestrator |
2025-09-17 00:26:42.989307 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-09-17 00:26:42.989317 | orchestrator | Wednesday 17 September 2025 00:26:42 +0000 (0:00:00.367) 0:03:56.053 ***
2025-09-17 00:26:42.989328 | orchestrator | ok: [testbed-manager]
2025-09-17 00:26:42.989338 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:26:42.989349 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:26:42.989359 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:26:42.989369 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:26:42.989386 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:27:51.291303 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:27:51.291447 | orchestrator |
2025-09-17 00:27:51.291465 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-09-17 00:27:51.291478 | orchestrator | Wednesday 17 September 2025 00:26:42 +0000 (0:00:00.277) 0:03:56.331 ***
2025-09-17 00:27:51.291489 | orchestrator | ok: [testbed-manager]
2025-09-17 00:27:51.291500 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:27:51.291512 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:27:51.291523 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:27:51.291534 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:27:51.291544 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:27:51.291555 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:27:51.291565 | orchestrator |
2025-09-17 00:27:51.291576 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-09-17 00:27:51.291587 | orchestrator | Wednesday 17 September 2025 00:26:48 +0000 (0:00:05.468) 0:04:01.800 ***
2025-09-17 00:27:51.291600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:27:51.291614 | orchestrator |
2025-09-17 00:27:51.291625 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-09-17 00:27:51.291636 | orchestrator | Wednesday 17 September 2025 00:26:48 +0000 (0:00:00.311) 0:04:02.111 ***
2025-09-17 00:27:51.291647 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-09-17 00:27:51.291658 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-09-17 00:27:51.291669 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:27:51.291680 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-09-17 00:27:51.291691 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-09-17 00:27:51.291701 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-09-17 00:27:51.291712 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-09-17 00:27:51.291722 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:27:51.291733 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-09-17 00:27:51.291744 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-09-17 00:27:51.291754 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:27:51.291818 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-09-17 00:27:51.291832 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-09-17 00:27:51.291844 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:27:51.291857 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-09-17 00:27:51.291869 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-09-17 00:27:51.291881 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:27:51.291892 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:27:51.291905 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-09-17 00:27:51.291917 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-09-17 00:27:51.291930 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:27:51.291942 | orchestrator |
2025-09-17 00:27:51.291955 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-09-17 00:27:51.291967 | orchestrator | Wednesday 17 September 2025 00:26:49 +0000 (0:00:00.254) 0:04:02.365 ***
2025-09-17 00:27:51.291997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:27:51.292011 | orchestrator |
2025-09-17 00:27:51.292023 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-09-17 00:27:51.292036 | orchestrator | Wednesday 17 September 2025 00:26:49 +0000 (0:00:00.291) 0:04:02.744 ***
2025-09-17 00:27:51.292048 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-09-17 00:27:51.292061 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-09-17 00:27:51.292073 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:27:51.292085 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-09-17 00:27:51.292097 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:27:51.292109 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:27:51.292120 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-09-17 00:27:51.292132 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:27:51.292144 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-09-17 00:27:51.292156 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-09-17 00:27:51.292168 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:27:51.292178 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:27:51.292189 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-09-17 00:27:51.292199 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:27:51.292209 | orchestrator |
2025-09-17 00:27:51.292220 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-09-17 00:27:51.292231 | orchestrator | Wednesday 17 September 2025 00:26:49 +0000 (0:00:00.291) 0:04:03.036 ***
2025-09-17 00:27:51.292241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:27:51.292252 | orchestrator |
2025-09-17 00:27:51.292263 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-09-17 00:27:51.292273 | orchestrator | Wednesday 17 September 2025 00:26:50 +0000 (0:00:00.428) 0:04:03.464 ***
2025-09-17 00:27:51.292284 | orchestrator | changed: [testbed-manager]
2025-09-17 00:27:51.292313 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:27:51.292325 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:27:51.292336 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:27:51.292346 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:27:51.292357 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:27:51.292367 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:27:51.292378 | orchestrator |
2025-09-17 00:27:51.292388 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-09-17 00:27:51.292408 | orchestrator | Wednesday 17 September 2025 00:27:23 +0000 (0:00:33.137) 0:04:36.602 ***
2025-09-17 00:27:51.292418 | orchestrator | changed: [testbed-manager]
2025-09-17 00:27:51.292429 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:27:51.292439 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:27:51.292450 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:27:51.292460 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:27:51.292471 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:27:51.292481 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:27:51.292492 | orchestrator |
2025-09-17 00:27:51.292503 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-09-17 00:27:51.292513 | orchestrator | Wednesday 17 September 2025 00:27:31 +0000 (0:00:08.050) 0:04:44.652 ***
2025-09-17 00:27:51.292524 | orchestrator | changed: [testbed-manager]
2025-09-17 00:27:51.292534 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:27:51.292545 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:27:51.292555 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:27:51.292566 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:27:51.292576 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:27:51.292587 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:27:51.292597 | orchestrator |
2025-09-17 00:27:51.292608 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-09-17 00:27:51.292619 | orchestrator | Wednesday 17 September 2025 00:27:39 +0000 (0:00:08.200) 0:04:52.852 ***
2025-09-17 00:27:51.292629 | orchestrator | ok: [testbed-manager]
2025-09-17 00:27:51.292640 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:27:51.292651 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:27:51.292662 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:27:51.292672 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:27:51.292683 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:27:51.292693 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:27:51.292703 | orchestrator |
2025-09-17 00:27:51.292714 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-09-17 00:27:51.292726 | orchestrator | Wednesday 17 September 2025 00:27:41 +0000 (0:00:01.722) 0:04:54.575 ***
2025-09-17 00:27:51.292737 | orchestrator | changed: [testbed-manager]
2025-09-17 00:27:51.292747 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:27:51.292758 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:27:51.292785 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:27:51.292796 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:27:51.292806 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:27:51.292817 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:27:51.292828 | orchestrator |
2025-09-17 00:27:51.292838 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-09-17 00:27:51.292849 | orchestrator | Wednesday 17 September 2025 00:27:47 +0000 (0:00:06.100) 0:05:00.676 ***
2025-09-17 00:27:51.292861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:27:51.292874 | orchestrator |
2025-09-17 00:27:51.292885 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-09-17 00:27:51.292901 | orchestrator | Wednesday 17 September 2025 00:27:47 +0000 (0:00:00.511) 0:05:01.188 ***
2025-09-17 00:27:51.292912 | orchestrator | changed: [testbed-manager]
2025-09-17 00:27:51.292923 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:27:51.292933 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:27:51.292943 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:27:51.292954 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:27:51.292965 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:27:51.292975 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:27:51.292986 | orchestrator |
2025-09-17 00:27:51.292997 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-09-17 00:27:51.293016 | orchestrator | Wednesday 17 September 2025 00:27:48 +0000 (0:00:00.753) 0:05:01.941 ***
2025-09-17 00:27:51.293026 | orchestrator | ok: [testbed-manager]
2025-09-17 00:27:51.293037 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:27:51.293048 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:27:51.293058 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:27:51.293069 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:27:51.293079 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:27:51.293090 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:27:51.293100 | orchestrator |
2025-09-17 00:27:51.293111 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-09-17 00:27:51.293122 | orchestrator | Wednesday 17 September 2025 00:27:50 +0000 (0:00:01.629) 0:05:03.570 ***
2025-09-17 00:27:51.293132 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:27:51.293143 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:27:51.293153 | orchestrator | changed: [testbed-manager]
2025-09-17 00:27:51.293164 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:27:51.293174 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:27:51.293185 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:27:51.293196 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:27:51.293206 | orchestrator |
2025-09-17 00:27:51.293217 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-09-17 00:27:51.293228 | orchestrator | Wednesday 17 September 2025 00:27:51 +0000 (0:00:00.785) 0:05:04.356 ***
2025-09-17 00:27:51.293238 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:27:51.293249 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:27:51.293259 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:27:51.293270 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:27:51.293280 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:27:51.293291 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:27:51.293301 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:27:51.293312 | orchestrator |
2025-09-17 00:27:51.293322 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-09-17 00:27:51.293340 | orchestrator | Wednesday 17 September 2025 00:27:51 +0000 (0:00:00.275) 0:05:04.631 ***
2025-09-17 00:28:16.331529 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:28:16.331721 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:28:16.331741 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:28:16.331753 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:28:16.331832 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:28:16.331854 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:28:16.331870 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:28:16.331889 | orchestrator |
2025-09-17 00:28:16.331909 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-09-17 00:28:16.331931 | orchestrator | Wednesday 17 September 2025 00:27:51 +0000 (0:00:00.373) 0:05:05.005 ***
2025-09-17 00:28:16.331949 | orchestrator | ok: [testbed-manager]
2025-09-17 00:28:16.331969 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:28:16.331987 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:28:16.332006 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:28:16.332025 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:28:16.332045 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:28:16.332065 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:28:16.332084 | orchestrator |
2025-09-17 00:28:16.332105 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-09-17 00:28:16.332124 | orchestrator | Wednesday 17 September 2025 00:27:51 +0000 (0:00:00.267) 0:05:05.273 ***
2025-09-17 00:28:16.332144 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:28:16.332158 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:28:16.332171 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:28:16.332183 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:28:16.332195 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:28:16.332207 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:28:16.332220 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:28:16.332261 | orchestrator |
2025-09-17 00:28:16.332275 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-09-17 00:28:16.332289 | orchestrator | Wednesday 17 September 2025 00:27:52 +0000 (0:00:00.297) 0:05:05.570 ***
2025-09-17 00:28:16.332301 | orchestrator | ok: [testbed-manager]
2025-09-17 00:28:16.332313 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:28:16.332325 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:28:16.332337 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:28:16.332350 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:28:16.332361 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:28:16.332373 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:28:16.332385 | orchestrator |
2025-09-17 00:28:16.332396 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-17 00:28:16.332407 | orchestrator | Wednesday 17 September 2025 00:27:52 +0000 (0:00:00.320) 0:05:05.890 ***
2025-09-17 00:28:16.332417 | orchestrator | ok: [testbed-manager] =>
2025-09-17 00:28:16.332428 | orchestrator |  docker_version: 5:27.5.1
2025-09-17 00:28:16.332439 | orchestrator | ok: [testbed-node-3] =>
2025-09-17 00:28:16.332449 | orchestrator |  docker_version: 5:27.5.1
2025-09-17 00:28:16.332460 | orchestrator | ok: [testbed-node-4] =>
2025-09-17 00:28:16.332470 | orchestrator |  docker_version: 5:27.5.1
2025-09-17 00:28:16.332480 | orchestrator | ok: [testbed-node-5] =>
2025-09-17 00:28:16.332491 | orchestrator |  docker_version: 5:27.5.1
2025-09-17 00:28:16.332502 | orchestrator | ok: [testbed-node-0] =>
2025-09-17 00:28:16.332512 | orchestrator |  docker_version: 5:27.5.1
2025-09-17 00:28:16.332524 | orchestrator | ok: [testbed-node-1] =>
2025-09-17 00:28:16.332542 | orchestrator |  docker_version: 5:27.5.1
2025-09-17 00:28:16.332561 | orchestrator | ok: [testbed-node-2] =>
2025-09-17 00:28:16.332578 | orchestrator |  docker_version: 5:27.5.1
2025-09-17 00:28:16.332596 | orchestrator |
2025-09-17 00:28:16.332614 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-17 00:28:16.332634 | orchestrator | Wednesday 17 September 2025 00:27:52 +0000 (0:00:00.276) 0:05:06.167 ***
2025-09-17 00:28:16.332653 | orchestrator | ok: [testbed-manager] =>
2025-09-17 00:28:16.332672 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-17 00:28:16.332684 | orchestrator | ok: [testbed-node-3] =>
2025-09-17 00:28:16.332695 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-17 00:28:16.332705 | orchestrator | ok: [testbed-node-4] =>
2025-09-17 00:28:16.332716 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-17 00:28:16.332727 | orchestrator | ok: [testbed-node-5] =>
2025-09-17 00:28:16.332737 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-17 00:28:16.332748 | orchestrator | ok: [testbed-node-0] =>
2025-09-17 00:28:16.332758 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-17 00:28:16.332817 | orchestrator | ok: [testbed-node-1] =>
2025-09-17 00:28:16.332839 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-17 00:28:16.332859 | orchestrator | ok: [testbed-node-2] =>
2025-09-17 00:28:16.332880 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-17 00:28:16.332900 | orchestrator |
2025-09-17 00:28:16.332921 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-17 00:28:16.332942 | orchestrator | Wednesday 17 September 2025 00:27:53 +0000 (0:00:00.275) 0:05:06.442 ***
2025-09-17 00:28:16.332961 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:28:16.332980 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:28:16.333002 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:28:16.333021 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:28:16.333043 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:28:16.333065 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:28:16.333087 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:28:16.333109 | orchestrator |
2025-09-17 00:28:16.333130 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-17 00:28:16.333149 | orchestrator | Wednesday 17 September 2025 00:27:53 +0000 (0:00:00.260) 0:05:06.703 ***
2025-09-17 00:28:16.333169 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:28:16.333203 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:28:16.333219 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:28:16.333230 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:28:16.333241 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:28:16.333252 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:28:16.333262 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:28:16.333273 | orchestrator | 2025-09-17 00:28:16.333284 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-17 00:28:16.333295 | orchestrator | Wednesday 17 September 2025 00:27:53 +0000 (0:00:00.265) 0:05:06.968 *** 2025-09-17 00:28:16.333330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:28:16.333345 | orchestrator | 2025-09-17 00:28:16.333357 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-17 00:28:16.333367 | orchestrator | Wednesday 17 September 2025 00:27:54 +0000 (0:00:00.387) 0:05:07.355 *** 2025-09-17 00:28:16.333378 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:28:16.333389 | orchestrator | ok: [testbed-manager] 2025-09-17 00:28:16.333400 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:28:16.333411 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:28:16.333421 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:28:16.333432 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:28:16.333443 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:28:16.333453 | orchestrator | 2025-09-17 00:28:16.333464 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-17 00:28:16.333475 | orchestrator | Wednesday 17 September 2025 00:27:54 +0000 (0:00:00.768) 0:05:08.124 *** 2025-09-17 00:28:16.333485 | orchestrator 
| ok: [testbed-node-4] 2025-09-17 00:28:16.333496 | orchestrator | ok: [testbed-manager] 2025-09-17 00:28:16.333532 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:28:16.333551 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:28:16.333570 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:28:16.333588 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:28:16.333605 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:28:16.333623 | orchestrator | 2025-09-17 00:28:16.333641 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-17 00:28:16.333662 | orchestrator | Wednesday 17 September 2025 00:27:57 +0000 (0:00:03.016) 0:05:11.140 *** 2025-09-17 00:28:16.333681 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-17 00:28:16.333699 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-17 00:28:16.333715 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-17 00:28:16.333726 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-17 00:28:16.333737 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-17 00:28:16.333747 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-17 00:28:16.333788 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:28:16.333811 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-17 00:28:16.333822 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-17 00:28:16.333833 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-17 00:28:16.333843 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:28:16.333854 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-17 00:28:16.333864 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-17 00:28:16.333875 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  
2025-09-17 00:28:16.333885 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:28:16.333896 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-17 00:28:16.333906 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-17 00:28:16.333917 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-17 00:28:16.333937 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:28:16.333948 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-17 00:28:16.333958 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-17 00:28:16.333969 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-17 00:28:16.333979 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:28:16.333990 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:28:16.334007 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-17 00:28:16.334075 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-17 00:28:16.334090 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-17 00:28:16.334101 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:28:16.334112 | orchestrator | 2025-09-17 00:28:16.334122 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-17 00:28:16.334133 | orchestrator | Wednesday 17 September 2025 00:27:58 +0000 (0:00:00.551) 0:05:11.692 *** 2025-09-17 00:28:16.334144 | orchestrator | ok: [testbed-manager] 2025-09-17 00:28:16.334154 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:28:16.334165 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:28:16.334176 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:28:16.334186 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:28:16.334197 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:28:16.334207 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:28:16.334218 | 
orchestrator | 2025-09-17 00:28:16.334229 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-17 00:28:16.334239 | orchestrator | Wednesday 17 September 2025 00:28:04 +0000 (0:00:05.682) 0:05:17.375 *** 2025-09-17 00:28:16.334250 | orchestrator | ok: [testbed-manager] 2025-09-17 00:28:16.334260 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:28:16.334271 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:28:16.334282 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:28:16.334292 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:28:16.334303 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:28:16.334313 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:28:16.334324 | orchestrator | 2025-09-17 00:28:16.334334 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-17 00:28:16.334345 | orchestrator | Wednesday 17 September 2025 00:28:05 +0000 (0:00:01.200) 0:05:18.575 *** 2025-09-17 00:28:16.334356 | orchestrator | ok: [testbed-manager] 2025-09-17 00:28:16.334366 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:28:16.334377 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:28:16.334387 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:28:16.334398 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:28:16.334409 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:28:16.334419 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:28:16.334429 | orchestrator | 2025-09-17 00:28:16.334440 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-17 00:28:16.334451 | orchestrator | Wednesday 17 September 2025 00:28:12 +0000 (0:00:07.638) 0:05:26.213 *** 2025-09-17 00:28:16.334462 | orchestrator | changed: [testbed-manager] 2025-09-17 00:28:16.334473 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:28:16.334483 | orchestrator | changed: 
[testbed-node-3] 2025-09-17 00:28:16.334505 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:01.934334 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:01.934480 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:01.934497 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:01.934509 | orchestrator | 2025-09-17 00:29:01.934522 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-17 00:29:01.934535 | orchestrator | Wednesday 17 September 2025 00:28:16 +0000 (0:00:03.443) 0:05:29.657 *** 2025-09-17 00:29:01.934547 | orchestrator | ok: [testbed-manager] 2025-09-17 00:29:01.934559 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:01.934569 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:29:01.934608 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:01.934619 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:01.934630 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:01.934640 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:01.934651 | orchestrator | 2025-09-17 00:29:01.934662 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-17 00:29:01.934672 | orchestrator | Wednesday 17 September 2025 00:28:17 +0000 (0:00:01.375) 0:05:31.033 *** 2025-09-17 00:29:01.934691 | orchestrator | ok: [testbed-manager] 2025-09-17 00:29:01.934709 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:01.934726 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:29:01.934745 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:01.934765 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:01.934813 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:01.934829 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:01.934841 | orchestrator | 2025-09-17 00:29:01.934854 | orchestrator | TASK [osism.services.docker : Unlock containerd package] 
*********************** 2025-09-17 00:29:01.934866 | orchestrator | Wednesday 17 September 2025 00:28:19 +0000 (0:00:01.398) 0:05:32.432 *** 2025-09-17 00:29:01.934878 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:29:01.934891 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:29:01.934904 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:29:01.934915 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:29:01.934928 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:29:01.934940 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:29:01.934952 | orchestrator | changed: [testbed-manager] 2025-09-17 00:29:01.934965 | orchestrator | 2025-09-17 00:29:01.934977 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-17 00:29:01.934990 | orchestrator | Wednesday 17 September 2025 00:28:20 +0000 (0:00:01.807) 0:05:34.239 *** 2025-09-17 00:29:01.935002 | orchestrator | ok: [testbed-manager] 2025-09-17 00:29:01.935015 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:01.935028 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:01.935040 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:01.935052 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:29:01.935064 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:01.935076 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:01.935088 | orchestrator | 2025-09-17 00:29:01.935101 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-17 00:29:01.935113 | orchestrator | Wednesday 17 September 2025 00:28:30 +0000 (0:00:09.852) 0:05:44.091 *** 2025-09-17 00:29:01.935126 | orchestrator | changed: [testbed-manager] 2025-09-17 00:29:01.935138 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:01.935150 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:29:01.935162 | orchestrator | changed: [testbed-node-5] 2025-09-17 
00:29:01.935175 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:01.935187 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:01.935198 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:01.935208 | orchestrator | 2025-09-17 00:29:01.935219 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-17 00:29:01.935248 | orchestrator | Wednesday 17 September 2025 00:28:31 +0000 (0:00:00.912) 0:05:45.003 *** 2025-09-17 00:29:01.935259 | orchestrator | ok: [testbed-manager] 2025-09-17 00:29:01.935270 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:01.935280 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:29:01.935290 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:01.935301 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:01.935312 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:01.935322 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:01.935333 | orchestrator | 2025-09-17 00:29:01.935343 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-17 00:29:01.935354 | orchestrator | Wednesday 17 September 2025 00:28:41 +0000 (0:00:09.819) 0:05:54.823 *** 2025-09-17 00:29:01.935375 | orchestrator | ok: [testbed-manager] 2025-09-17 00:29:01.935385 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:01.935396 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:01.935406 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:29:01.935417 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:01.935427 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:01.935438 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:01.935448 | orchestrator | 2025-09-17 00:29:01.935459 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-17 00:29:01.935469 | orchestrator | Wednesday 17 September 2025 00:28:52 +0000 
(0:00:11.165) 0:06:05.989 *** 2025-09-17 00:29:01.935480 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-17 00:29:01.935491 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-17 00:29:01.935502 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-17 00:29:01.935512 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-17 00:29:01.935523 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-17 00:29:01.935533 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-17 00:29:01.935544 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-17 00:29:01.935554 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-17 00:29:01.935564 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-17 00:29:01.935575 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-17 00:29:01.935585 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-17 00:29:01.935596 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-17 00:29:01.935606 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-17 00:29:01.935617 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-17 00:29:01.935628 | orchestrator | 2025-09-17 00:29:01.935638 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-17 00:29:01.935669 | orchestrator | Wednesday 17 September 2025 00:28:53 +0000 (0:00:01.230) 0:06:07.220 *** 2025-09-17 00:29:01.935681 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:29:01.935691 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:29:01.935702 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:29:01.935712 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:29:01.935723 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:29:01.935736 | orchestrator | skipping: [testbed-node-1] 
2025-09-17 00:29:01.935754 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:29:01.935797 | orchestrator | 2025-09-17 00:29:01.935819 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-17 00:29:01.935840 | orchestrator | Wednesday 17 September 2025 00:28:54 +0000 (0:00:00.508) 0:06:07.728 *** 2025-09-17 00:29:01.935858 | orchestrator | ok: [testbed-manager] 2025-09-17 00:29:01.935876 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:29:01.935888 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:01.935898 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:01.935909 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:01.935919 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:01.935929 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:01.935940 | orchestrator | 2025-09-17 00:29:01.935951 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-17 00:29:01.935963 | orchestrator | Wednesday 17 September 2025 00:28:57 +0000 (0:00:03.210) 0:06:10.939 *** 2025-09-17 00:29:01.935974 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:29:01.935984 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:29:01.935995 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:29:01.936005 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:29:01.936016 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:29:01.936026 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:29:01.936036 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:29:01.936056 | orchestrator | 2025-09-17 00:29:01.936069 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-17 00:29:01.936080 | orchestrator | Wednesday 17 September 2025 00:28:58 +0000 (0:00:00.476) 0:06:11.415 *** 2025-09-17 00:29:01.936091 | 
orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-17 00:29:01.936102 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-17 00:29:01.936113 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:29:01.936123 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-17 00:29:01.936134 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-17 00:29:01.936144 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:29:01.936155 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-17 00:29:01.936165 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-17 00:29:01.936176 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:29:01.936186 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-17 00:29:01.936196 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-17 00:29:01.936207 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:29:01.936217 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-17 00:29:01.936228 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-17 00:29:01.936238 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:29:01.936249 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-17 00:29:01.936266 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-17 00:29:01.936277 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:29:01.936287 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-17 00:29:01.936297 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-17 00:29:01.936308 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:29:01.936318 | orchestrator | 2025-09-17 00:29:01.936329 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from 
pip)] *** 2025-09-17 00:29:01.936340 | orchestrator | Wednesday 17 September 2025 00:28:58 +0000 (0:00:00.682) 0:06:12.097 *** 2025-09-17 00:29:01.936350 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:29:01.936361 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:29:01.936371 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:29:01.936382 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:29:01.936392 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:29:01.936403 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:29:01.936414 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:29:01.936424 | orchestrator | 2025-09-17 00:29:01.936434 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-17 00:29:01.936445 | orchestrator | Wednesday 17 September 2025 00:28:59 +0000 (0:00:00.500) 0:06:12.598 *** 2025-09-17 00:29:01.936456 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:29:01.936466 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:29:01.936477 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:29:01.936487 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:29:01.936498 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:29:01.936508 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:29:01.936519 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:29:01.936529 | orchestrator | 2025-09-17 00:29:01.936540 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-17 00:29:01.936550 | orchestrator | Wednesday 17 September 2025 00:28:59 +0000 (0:00:00.477) 0:06:13.076 *** 2025-09-17 00:29:01.936561 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:29:01.936571 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:29:01.936582 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:29:01.936592 | orchestrator | skipping: [testbed-node-5] 
2025-09-17 00:29:01.936603 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:29:01.936620 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:29:01.936631 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:29:01.936641 | orchestrator | 2025-09-17 00:29:01.936652 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-17 00:29:01.936663 | orchestrator | Wednesday 17 September 2025 00:29:00 +0000 (0:00:00.524) 0:06:13.600 *** 2025-09-17 00:29:01.936674 | orchestrator | ok: [testbed-manager] 2025-09-17 00:29:01.936692 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:29:23.448186 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:29:23.448336 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:29:23.448351 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:29:23.448362 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:29:23.448374 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:29:23.448385 | orchestrator | 2025-09-17 00:29:23.448399 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-17 00:29:23.448412 | orchestrator | Wednesday 17 September 2025 00:29:01 +0000 (0:00:01.676) 0:06:15.277 *** 2025-09-17 00:29:23.448425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:29:23.448439 | orchestrator | 2025-09-17 00:29:23.448450 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-17 00:29:23.448460 | orchestrator | Wednesday 17 September 2025 00:29:02 +0000 (0:00:01.024) 0:06:16.301 *** 2025-09-17 00:29:23.448471 | orchestrator | ok: [testbed-manager] 2025-09-17 00:29:23.448482 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:23.448494 | orchestrator | changed: [testbed-node-4] 
2025-09-17 00:29:23.448505 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:23.448515 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:23.448526 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:23.448537 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:23.448547 | orchestrator | 2025-09-17 00:29:23.448558 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-17 00:29:23.448569 | orchestrator | Wednesday 17 September 2025 00:29:03 +0000 (0:00:00.883) 0:06:17.185 *** 2025-09-17 00:29:23.448579 | orchestrator | ok: [testbed-manager] 2025-09-17 00:29:23.448590 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:23.448600 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:29:23.448611 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:23.448623 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:23.448634 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:23.448645 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:23.448655 | orchestrator | 2025-09-17 00:29:23.448668 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-17 00:29:23.448681 | orchestrator | Wednesday 17 September 2025 00:29:04 +0000 (0:00:00.813) 0:06:17.998 *** 2025-09-17 00:29:23.448693 | orchestrator | ok: [testbed-manager] 2025-09-17 00:29:23.448705 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:23.448718 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:29:23.448731 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:23.448743 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:23.448755 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:23.448768 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:23.448809 | orchestrator | 2025-09-17 00:29:23.448824 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] 
*** 2025-09-17 00:29:23.448838 | orchestrator | Wednesday 17 September 2025 00:29:05 +0000 (0:00:01.327) 0:06:19.326 *** 2025-09-17 00:29:23.448851 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:29:23.448863 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:29:23.448875 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:29:23.448887 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:29:23.448899 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:29:23.448911 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:29:23.448954 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:29:23.448967 | orchestrator | 2025-09-17 00:29:23.448979 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-17 00:29:23.449010 | orchestrator | Wednesday 17 September 2025 00:29:07 +0000 (0:00:01.489) 0:06:20.815 *** 2025-09-17 00:29:23.449023 | orchestrator | ok: [testbed-manager] 2025-09-17 00:29:23.449035 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:23.449046 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:29:23.449057 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:23.449067 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:29:23.449078 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:29:23.449088 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:29:23.449099 | orchestrator | 2025-09-17 00:29:23.449109 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-17 00:29:23.449120 | orchestrator | Wednesday 17 September 2025 00:29:08 +0000 (0:00:01.324) 0:06:22.140 *** 2025-09-17 00:29:23.449130 | orchestrator | changed: [testbed-manager] 2025-09-17 00:29:23.449141 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:29:23.449151 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:29:23.449162 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:29:23.449172 | orchestrator | changed: [testbed-node-0] 
2025-09-17 00:29:23.449183 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:29:23.449193 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:29:23.449204 | orchestrator |
2025-09-17 00:29:23.449214 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-17 00:29:23.449225 | orchestrator | Wednesday 17 September 2025  00:29:10 +0000 (0:00:01.380)       0:06:23.520 ***
2025-09-17 00:29:23.449236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:29:23.449247 | orchestrator |
2025-09-17 00:29:23.449257 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-17 00:29:23.449268 | orchestrator | Wednesday 17 September 2025  00:29:11 +0000 (0:00:00.975)       0:06:24.496 ***
2025-09-17 00:29:23.449278 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:23.449289 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:29:23.449300 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:29:23.449311 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:29:23.449321 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:29:23.449332 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:29:23.449342 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:29:23.449353 | orchestrator |
2025-09-17 00:29:23.449363 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-17 00:29:23.449374 | orchestrator | Wednesday 17 September 2025  00:29:12 +0000 (0:00:01.311)       0:06:25.807 ***
2025-09-17 00:29:23.449385 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:23.449395 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:29:23.449426 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:29:23.449438 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:29:23.449448 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:29:23.449459 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:29:23.449469 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:29:23.449479 | orchestrator |
2025-09-17 00:29:23.449490 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-17 00:29:23.449501 | orchestrator | Wednesday 17 September 2025  00:29:13 +0000 (0:00:01.111)       0:06:26.919 ***
2025-09-17 00:29:23.449511 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:23.449522 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:29:23.449533 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:29:23.449543 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:29:23.449553 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:29:23.449564 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:29:23.449574 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:29:23.449584 | orchestrator |
2025-09-17 00:29:23.449595 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-17 00:29:23.449615 | orchestrator | Wednesday 17 September 2025  00:29:14 +0000 (0:00:01.095)       0:06:28.015 ***
2025-09-17 00:29:23.449625 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:23.449636 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:29:23.449646 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:29:23.449657 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:29:23.449668 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:29:23.449678 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:29:23.449688 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:29:23.449699 | orchestrator |
2025-09-17 00:29:23.449710 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-17 00:29:23.449721 | orchestrator | Wednesday 17 September 2025  00:29:15 +0000 (0:00:01.167)       0:06:29.182 ***
2025-09-17 00:29:23.449731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:29:23.449742 | orchestrator |
2025-09-17 00:29:23.449753 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-17 00:29:23.449763 | orchestrator | Wednesday 17 September 2025  00:29:16 +0000 (0:00:01.018)       0:06:30.201 ***
2025-09-17 00:29:23.449774 | orchestrator |
2025-09-17 00:29:23.449826 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-17 00:29:23.449844 | orchestrator | Wednesday 17 September 2025  00:29:16 +0000 (0:00:00.037)       0:06:30.239 ***
2025-09-17 00:29:23.449862 | orchestrator |
2025-09-17 00:29:23.449873 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-17 00:29:23.449884 | orchestrator | Wednesday 17 September 2025  00:29:16 +0000 (0:00:00.038)       0:06:30.277 ***
2025-09-17 00:29:23.449894 | orchestrator |
2025-09-17 00:29:23.449905 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-17 00:29:23.449916 | orchestrator | Wednesday 17 September 2025  00:29:16 +0000 (0:00:00.044)       0:06:30.322 ***
2025-09-17 00:29:23.449926 | orchestrator |
2025-09-17 00:29:23.449937 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-17 00:29:23.449948 | orchestrator | Wednesday 17 September 2025  00:29:17 +0000 (0:00:00.037)       0:06:30.360 ***
2025-09-17 00:29:23.449958 | orchestrator |
2025-09-17 00:29:23.449969 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-17 00:29:23.449979 | orchestrator | Wednesday 17 September 2025  00:29:17 +0000 (0:00:00.037)       0:06:30.398 ***
2025-09-17 00:29:23.449990 | orchestrator |
2025-09-17 00:29:23.450001 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-17 00:29:23.450012 | orchestrator | Wednesday 17 September 2025  00:29:17 +0000 (0:00:00.045)       0:06:30.444 ***
2025-09-17 00:29:23.450085 | orchestrator |
2025-09-17 00:29:23.450097 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-17 00:29:23.450108 | orchestrator | Wednesday 17 September 2025  00:29:17 +0000 (0:00:00.039)       0:06:30.484 ***
2025-09-17 00:29:23.450118 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:29:23.450129 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:29:23.450140 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:29:23.450150 | orchestrator |
2025-09-17 00:29:23.450161 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-09-17 00:29:23.450171 | orchestrator | Wednesday 17 September 2025  00:29:18 +0000 (0:00:01.216)       0:06:31.700 ***
2025-09-17 00:29:23.450182 | orchestrator | changed: [testbed-manager]
2025-09-17 00:29:23.450193 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:29:23.450203 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:29:23.450214 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:29:23.450225 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:29:23.450235 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:29:23.450256 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:29:23.450267 | orchestrator |
2025-09-17 00:29:23.450278 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-09-17 00:29:23.450297 | orchestrator | Wednesday 17 September 2025  00:29:19 +0000 (0:00:01.480)       0:06:33.181 ***
2025-09-17 00:29:23.450307 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:29:23.450318 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:29:23.450329 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:29:23.450339 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:29:23.450350 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:29:23.450360 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:29:23.450371 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:29:23.450381 | orchestrator |
2025-09-17 00:29:23.450392 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-09-17 00:29:23.450403 | orchestrator | Wednesday 17 September 2025  00:29:22 +0000 (0:00:02.510)       0:06:35.692 ***
2025-09-17 00:29:23.450413 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:29:23.450424 | orchestrator |
2025-09-17 00:29:23.450434 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-09-17 00:29:23.450445 | orchestrator | Wednesday 17 September 2025  00:29:22 +0000 (0:00:00.094)       0:06:35.787 ***
2025-09-17 00:29:23.450455 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:23.450466 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:29:23.450476 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:29:23.450487 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:29:23.450506 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:29:48.489322 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:29:48.489471 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:29:48.489484 | orchestrator |
2025-09-17 00:29:48.489495 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-09-17 00:29:48.489506 | orchestrator | Wednesday 17 September 2025  00:29:23 +0000 (0:00:01.000)       0:06:36.787 ***
2025-09-17 00:29:48.489515 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:29:48.489524 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:29:48.489532 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:29:48.489541 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:29:48.489549 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:29:48.489558 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:29:48.489566 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:29:48.489575 | orchestrator |
2025-09-17 00:29:48.489583 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-09-17 00:29:48.489592 | orchestrator | Wednesday 17 September 2025  00:29:23 +0000 (0:00:00.559)       0:06:37.347 ***
2025-09-17 00:29:48.489601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:29:48.489613 | orchestrator |
2025-09-17 00:29:48.489622 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-09-17 00:29:48.489631 | orchestrator | Wednesday 17 September 2025  00:29:25 +0000 (0:00:01.013)       0:06:38.360 ***
2025-09-17 00:29:48.489640 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:48.489649 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:29:48.489658 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:29:48.489666 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:29:48.489675 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:29:48.489684 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:29:48.489692 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:29:48.489701 | orchestrator |
2025-09-17 00:29:48.489709 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-09-17 00:29:48.489718 | orchestrator | Wednesday 17 September 2025  00:29:25 +0000 (0:00:00.830)       0:06:39.191 ***
2025-09-17 00:29:48.489727 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-09-17 00:29:48.489736 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-09-17 00:29:48.489744 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-09-17 00:29:48.489774 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-09-17 00:29:48.489825 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-09-17 00:29:48.489835 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-09-17 00:29:48.489844 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-09-17 00:29:48.489853 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-09-17 00:29:48.489862 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-09-17 00:29:48.489872 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-09-17 00:29:48.489882 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-09-17 00:29:48.489892 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-09-17 00:29:48.489902 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-09-17 00:29:48.489925 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-09-17 00:29:48.489935 | orchestrator |
2025-09-17 00:29:48.489945 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-09-17 00:29:48.489955 | orchestrator | Wednesday 17 September 2025  00:29:28 +0000 (0:00:02.428)       0:06:41.620 ***
2025-09-17 00:29:48.489965 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:29:48.489976 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:29:48.489985 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:29:48.489995 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:29:48.490005 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:29:48.490066 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:29:48.490078 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:29:48.490088 | orchestrator |
2025-09-17 00:29:48.490098 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-09-17 00:29:48.490106 | orchestrator | Wednesday 17 September 2025  00:29:28 +0000 (0:00:00.498)       0:06:42.118 ***
2025-09-17 00:29:48.490116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:29:48.490126 | orchestrator |
2025-09-17 00:29:48.490135 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-09-17 00:29:48.490144 | orchestrator | Wednesday 17 September 2025  00:29:29 +0000 (0:00:00.990)       0:06:43.109 ***
2025-09-17 00:29:48.490152 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:48.490160 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:29:48.490169 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:29:48.490177 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:29:48.490185 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:29:48.490194 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:29:48.490202 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:29:48.490210 | orchestrator |
2025-09-17 00:29:48.490219 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-09-17 00:29:48.490227 | orchestrator | Wednesday 17 September 2025  00:29:30 +0000 (0:00:00.835)       0:06:43.944 ***
2025-09-17 00:29:48.490236 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:48.490244 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:29:48.490252 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:29:48.490261 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:29:48.490269 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:29:48.490277 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:29:48.490285 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:29:48.490294 | orchestrator |
2025-09-17 00:29:48.490302 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-09-17 00:29:48.490326 | orchestrator | Wednesday 17 September 2025  00:29:31 +0000 (0:00:00.833)       0:06:44.778 ***
2025-09-17 00:29:48.490336 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:29:48.490344 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:29:48.490353 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:29:48.490361 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:29:48.490381 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:29:48.490390 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:29:48.490398 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:29:48.490407 | orchestrator |
2025-09-17 00:29:48.490415 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-09-17 00:29:48.490424 | orchestrator | Wednesday 17 September 2025  00:29:31 +0000 (0:00:00.496)       0:06:45.275 ***
2025-09-17 00:29:48.490432 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:48.490441 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:29:48.490449 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:29:48.490457 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:29:48.490466 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:29:48.490474 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:29:48.490482 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:29:48.490491 | orchestrator |
2025-09-17 00:29:48.490500 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-09-17 00:29:48.490508 | orchestrator | Wednesday 17 September 2025  00:29:33 +0000 (0:00:01.837)       0:06:47.113 ***
2025-09-17 00:29:48.490516 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:29:48.490525 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:29:48.490533 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:29:48.490542 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:29:48.490550 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:29:48.490559 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:29:48.490567 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:29:48.490575 | orchestrator |
2025-09-17 00:29:48.490584 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-09-17 00:29:48.490592 | orchestrator | Wednesday 17 September 2025  00:29:34 +0000 (0:00:00.420)       0:06:47.533 ***
2025-09-17 00:29:48.490601 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:48.490609 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:29:48.490618 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:29:48.490626 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:29:48.490634 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:29:48.490642 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:29:48.490651 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:29:48.490659 | orchestrator |
2025-09-17 00:29:48.490668 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-09-17 00:29:48.490676 | orchestrator | Wednesday 17 September 2025  00:29:41 +0000 (0:00:07.619)       0:06:55.153 ***
2025-09-17 00:29:48.490685 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:48.490693 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:29:48.490701 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:29:48.490710 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:29:48.490718 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:29:48.490726 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:29:48.490735 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:29:48.490743 | orchestrator |
2025-09-17 00:29:48.490752 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-09-17 00:29:48.490760 | orchestrator | Wednesday 17 September 2025  00:29:43 +0000 (0:00:01.300)       0:06:56.453 ***
2025-09-17 00:29:48.490769 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:48.490777 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:29:48.490811 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:29:48.490820 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:29:48.490828 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:29:48.490841 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:29:48.490850 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:29:48.490858 | orchestrator |
2025-09-17 00:29:48.490867 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-09-17 00:29:48.490875 | orchestrator | Wednesday 17 September 2025  00:29:44 +0000 (0:00:01.671)       0:06:58.125 ***
2025-09-17 00:29:48.490884 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:48.490900 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:29:48.490908 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:29:48.490917 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:29:48.490925 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:29:48.490933 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:29:48.490942 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:29:48.490950 | orchestrator |
2025-09-17 00:29:48.490959 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-17 00:29:48.490967 | orchestrator | Wednesday 17 September 2025  00:29:46 +0000 (0:00:01.693)       0:06:59.818 ***
2025-09-17 00:29:48.490975 | orchestrator | ok: [testbed-manager]
2025-09-17 00:29:48.490984 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:29:48.490992 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:29:48.491001 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:29:48.491009 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:29:48.491017 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:29:48.491026 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:29:48.491034 | orchestrator |
2025-09-17 00:29:48.491043 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-17 00:29:48.491051 | orchestrator | Wednesday 17 September 2025  00:29:47 +0000 (0:00:00.784)       0:07:00.602 ***
2025-09-17 00:29:48.491060 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:29:48.491068 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:29:48.491076 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:29:48.491085 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:29:48.491093 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:29:48.491102 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:29:48.491110 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:29:48.491118 | orchestrator |
2025-09-17 00:29:48.491127 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-09-17 00:29:48.491135 | orchestrator | Wednesday 17 September 2025  00:29:48 +0000 (0:00:00.802)       0:07:01.405 ***
2025-09-17 00:29:48.491143 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:29:48.491152 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:29:48.491160 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:29:48.491168 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:29:48.491177 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:29:48.491185 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:29:48.491194 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:29:48.491202 | orchestrator |
2025-09-17 00:29:48.491216 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-09-17 00:30:20.102712 | orchestrator | Wednesday 17 September 2025  00:29:48 +0000 (0:00:00.425)       0:07:01.830 ***
2025-09-17 00:30:20.102864 | orchestrator | ok: [testbed-manager]
2025-09-17 00:30:20.102885 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:20.102905 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:20.102924 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:20.102943 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:20.102963 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:20.102975 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:20.102987 | orchestrator |
2025-09-17 00:30:20.102999 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-09-17 00:30:20.103010 | orchestrator | Wednesday 17 September 2025  00:29:48 +0000 (0:00:00.421)       0:07:02.252 ***
2025-09-17 00:30:20.103021 | orchestrator | ok: [testbed-manager]
2025-09-17 00:30:20.103032 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:20.103043 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:20.103053 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:20.103064 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:20.103074 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:20.103085 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:20.103096 | orchestrator |
2025-09-17 00:30:20.103107 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-09-17 00:30:20.103118 | orchestrator | Wednesday 17 September 2025  00:29:49 +0000 (0:00:00.442)       0:07:02.694 ***
2025-09-17 00:30:20.103157 | orchestrator | ok: [testbed-manager]
2025-09-17 00:30:20.103169 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:20.103179 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:20.103190 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:20.103200 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:20.103211 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:20.103221 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:20.103232 | orchestrator |
2025-09-17 00:30:20.103245 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-09-17 00:30:20.103258 | orchestrator | Wednesday 17 September 2025  00:29:49 +0000 (0:00:00.451)       0:07:03.146 ***
2025-09-17 00:30:20.103270 | orchestrator | ok: [testbed-manager]
2025-09-17 00:30:20.103282 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:20.103293 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:20.103305 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:20.103316 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:20.103329 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:20.103341 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:20.103353 | orchestrator |
2025-09-17 00:30:20.103366 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-09-17 00:30:20.103378 | orchestrator | Wednesday 17 September 2025  00:29:55 +0000 (0:00:05.359)       0:07:08.505 ***
2025-09-17 00:30:20.103390 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:30:20.103403 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:30:20.103416 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:30:20.103428 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:30:20.103440 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:30:20.103453 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:30:20.103463 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:30:20.103474 | orchestrator |
2025-09-17 00:30:20.103485 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-09-17 00:30:20.103496 | orchestrator | Wednesday 17 September 2025  00:29:55 +0000 (0:00:00.439)       0:07:08.945 ***
2025-09-17 00:30:20.103523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:30:20.103537 | orchestrator |
2025-09-17 00:30:20.103548 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-09-17 00:30:20.103559 | orchestrator | Wednesday 17 September 2025  00:29:56 +0000 (0:00:00.675)       0:07:09.621 ***
2025-09-17 00:30:20.103570 | orchestrator | ok: [testbed-manager]
2025-09-17 00:30:20.103580 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:20.103591 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:20.103602 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:20.103613 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:20.103623 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:20.103634 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:20.103644 | orchestrator |
2025-09-17 00:30:20.103655 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-09-17 00:30:20.103666 | orchestrator | Wednesday 17 September 2025  00:29:58 +0000 (0:00:01.927)       0:07:11.548 ***
2025-09-17 00:30:20.103676 | orchestrator | ok: [testbed-manager]
2025-09-17 00:30:20.103687 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:20.103697 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:20.103708 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:20.103718 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:20.103728 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:20.103739 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:20.103749 | orchestrator |
2025-09-17 00:30:20.103762 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-09-17 00:30:20.103781 | orchestrator | Wednesday 17 September 2025  00:29:59 +0000 (0:00:01.183)       0:07:12.732 ***
2025-09-17 00:30:20.103826 | orchestrator | ok: [testbed-manager]
2025-09-17 00:30:20.103846 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:20.103875 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:20.103887 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:20.103897 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:20.103907 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:20.103918 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:20.103928 | orchestrator |
2025-09-17 00:30:20.103939 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-09-17 00:30:20.103949 | orchestrator | Wednesday 17 September 2025  00:30:00 +0000 (0:00:00.825)       0:07:13.557 ***
2025-09-17 00:30:20.103964 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-17 00:30:20.103985 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-17 00:30:20.104003 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-17 00:30:20.104039 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-17 00:30:20.104051 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-17 00:30:20.104062 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-17 00:30:20.104073 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-17 00:30:20.104083 | orchestrator |
2025-09-17 00:30:20.104094 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-09-17 00:30:20.104105 | orchestrator | Wednesday 17 September 2025  00:30:01 +0000 (0:00:01.678)       0:07:15.235 ***
2025-09-17 00:30:20.104116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:30:20.104127 | orchestrator |
2025-09-17 00:30:20.104144 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-09-17 00:30:20.104163 | orchestrator | Wednesday 17 September 2025  00:30:02 +0000 (0:00:01.025)       0:07:16.260 ***
2025-09-17 00:30:20.104182 | orchestrator | changed: [testbed-manager]
2025-09-17 00:30:20.104199 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:30:20.104218 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:30:20.104231 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:30:20.104242 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:30:20.104252 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:30:20.104262 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:30:20.104273 | orchestrator |
2025-09-17 00:30:20.104283 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-09-17 00:30:20.104294 | orchestrator | Wednesday 17 September 2025  00:30:12 +0000 (0:00:09.158)       0:07:25.419 ***
2025-09-17 00:30:20.104304 | orchestrator | ok: [testbed-manager]
2025-09-17 00:30:20.104315 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:20.104325 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:20.104336 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:20.104346 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:20.104356 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:20.104367 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:20.104377 | orchestrator |
2025-09-17 00:30:20.104388 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-09-17 00:30:20.104399 | orchestrator | Wednesday 17 September 2025  00:30:13 +0000 (0:00:01.876)       0:07:27.296 ***
2025-09-17 00:30:20.104409 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:20.104420 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:20.104440 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:20.104450 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:20.104460 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:20.104471 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:20.104481 | orchestrator |
2025-09-17 00:30:20.104492 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-09-17 00:30:20.104509 | orchestrator | Wednesday 17 September 2025  00:30:15 +0000 (0:00:01.267)       0:07:28.563 ***
2025-09-17 00:30:20.104520 | orchestrator | changed: [testbed-manager]
2025-09-17 00:30:20.104530 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:30:20.104541 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:30:20.104551 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:30:20.104562 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:30:20.104572 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:30:20.104583 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:30:20.104593 | orchestrator |
2025-09-17 00:30:20.104603 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-09-17 00:30:20.104614 | orchestrator |
2025-09-17 00:30:20.104624 | orchestrator | TASK [Include hardening role] **************************************************
2025-09-17 00:30:20.104635 | orchestrator | Wednesday 17 September 2025  00:30:16 +0000 (0:00:01.421)       0:07:29.985 ***
2025-09-17 00:30:20.104645 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:30:20.104656 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:30:20.104666 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:30:20.104677 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:30:20.104687 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:30:20.104697 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:30:20.104708 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:30:20.104718 | orchestrator |
2025-09-17 00:30:20.104729 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-09-17 00:30:20.104739 | orchestrator |
2025-09-17 00:30:20.104750 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-09-17 00:30:20.104760 | orchestrator | Wednesday 17 September 2025 00:30:17 +0000 (0:00:00.513) 0:07:30.498 ***
2025-09-17 00:30:20.104771 | orchestrator | changed: [testbed-manager]
2025-09-17 00:30:20.104781 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:30:20.104829 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:30:20.104842 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:30:20.104853 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:30:20.104863 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:30:20.104873 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:30:20.104884 | orchestrator |
2025-09-17 00:30:20.104894 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-09-17 00:30:20.104905 | orchestrator | Wednesday 17 September 2025 00:30:18 +0000 (0:00:01.331) 0:07:31.829 ***
2025-09-17 00:30:20.104915 | orchestrator | ok: [testbed-manager]
2025-09-17 00:30:20.104926 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:20.104936 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:20.104947 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:20.104957 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:20.104968 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:20.104978 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:20.104988 | orchestrator |
2025-09-17 00:30:20.104999 | orchestrator | TASK [Include auditd role] *****************************************************
2025-09-17 00:30:20.105017 | orchestrator | Wednesday 17 September 2025 00:30:20 +0000 (0:00:01.611) 0:07:33.440 ***
2025-09-17 00:30:43.158211 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:30:43.158333 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:30:43.158348 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:30:43.158360 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:30:43.158372 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:30:43.158383 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:30:43.158394 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:30:43.158405 | orchestrator |
2025-09-17 00:30:43.158439 | orchestrator | TASK [Include smartd role] *****************************************************
2025-09-17 00:30:43.158452 | orchestrator | Wednesday 17 September 2025 00:30:20 +0000 (0:00:00.453) 0:07:33.894 ***
2025-09-17 00:30:43.158463 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:30:43.158475 | orchestrator |
2025-09-17 00:30:43.158486 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-17 00:30:43.158496 | orchestrator | Wednesday 17 September 2025 00:30:21 +0000 (0:00:00.952) 0:07:34.847 ***
2025-09-17 00:30:43.158509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:30:43.158523 | orchestrator |
2025-09-17 00:30:43.158534 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-17 00:30:43.158544 | orchestrator | Wednesday 17 September 2025 00:30:22 +0000 (0:00:00.789) 0:07:35.637 ***
2025-09-17 00:30:43.158555 | orchestrator | changed: [testbed-manager]
2025-09-17 00:30:43.158566 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:30:43.158576 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:30:43.158587 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:30:43.158597 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:30:43.158608 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:30:43.158618 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:30:43.158629 | orchestrator |
2025-09-17 00:30:43.158639 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-17 00:30:43.158650 | orchestrator | Wednesday 17 September 2025 00:30:30 +0000 (0:00:08.146) 0:07:43.784 ***
2025-09-17 00:30:43.158660 | orchestrator | changed: [testbed-manager]
2025-09-17 00:30:43.158671 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:30:43.158681 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:30:43.158692 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:30:43.158702 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:30:43.158713 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:30:43.158723 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:30:43.158734 | orchestrator |
2025-09-17 00:30:43.158746 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-17 00:30:43.158759 | orchestrator | Wednesday 17 September 2025 00:30:31 +0000 (0:00:00.857) 0:07:44.641 ***
2025-09-17 00:30:43.158771 | orchestrator | changed: [testbed-manager]
2025-09-17 00:30:43.158783 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:30:43.158820 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:30:43.158832 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:30:43.158845 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:30:43.158858 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:30:43.158871 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:30:43.158882 | orchestrator |
2025-09-17 00:30:43.158894 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-17 00:30:43.158908 | orchestrator | Wednesday 17 September 2025 00:30:32 +0000 (0:00:01.661) 0:07:46.303 ***
2025-09-17 00:30:43.158920 | orchestrator | changed: [testbed-manager]
2025-09-17 00:30:43.158932 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:30:43.158944 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:30:43.158956 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:30:43.158968 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:30:43.158981 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:30:43.158993 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:30:43.159004 | orchestrator |
2025-09-17 00:30:43.159014 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-09-17 00:30:43.159025 | orchestrator | Wednesday 17 September 2025 00:30:34 +0000 (0:00:01.728) 0:07:48.032 ***
2025-09-17 00:30:43.159036 | orchestrator | changed: [testbed-manager]
2025-09-17 00:30:43.159055 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:30:43.159065 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:30:43.159076 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:30:43.159086 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:30:43.159097 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:30:43.159107 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:30:43.159118 | orchestrator |
2025-09-17 00:30:43.159128 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-09-17 00:30:43.159139 | orchestrator | Wednesday 17 September 2025 00:30:35 +0000 (0:00:01.155) 0:07:49.188 ***
2025-09-17 00:30:43.159149 | orchestrator | changed: [testbed-manager]
2025-09-17 00:30:43.159160 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:30:43.159170 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:30:43.159181 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:30:43.159191 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:30:43.159202 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:30:43.159212 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:30:43.159222 | orchestrator |
2025-09-17 00:30:43.159233 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-09-17 00:30:43.159243 | orchestrator |
2025-09-17 00:30:43.159254 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-09-17 00:30:43.159309 | orchestrator | Wednesday 17 September 2025 00:30:37 +0000 (0:00:01.424) 0:07:50.613 ***
2025-09-17 00:30:43.159322 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:30:43.159333 | orchestrator |
2025-09-17 00:30:43.159344 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-17 00:30:43.159371 | orchestrator | Wednesday 17 September 2025 00:30:38 +0000 (0:00:00.852) 0:07:51.466 ***
2025-09-17 00:30:43.159383 | orchestrator | ok: [testbed-manager]
2025-09-17 00:30:43.159395 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:43.159405 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:43.159416 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:43.159427 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:43.159437 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:43.159448 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:43.159458 | orchestrator |
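The osism.commons.state role above creates a custom facts directory and then writes the state into a file. Ansible local facts conventionally live under /etc/ansible/facts.d; a hedged sketch of what such a written fact file might contain — the path, file name, and keys are assumptions, only the role and fact names (osism.bootstrap.status, osism.bootstrap.timestamp) come from the log:

```ini
; Hypothetical /etc/ansible/facts.d/osism.fact -- illustrative only.
; INI-style local fact files are re-read on the next fact gathering
; and exposed under ansible_local.
[bootstrap]
status = bootstrapped
timestamp = 2025-09-17T00:30:40Z
```

Persisting state this way lets later plays branch on ansible_local values instead of re-running bootstrap work.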
2025-09-17 00:30:43.159469 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-17 00:30:43.159480 | orchestrator | Wednesday 17 September 2025 00:30:38 +0000 (0:00:00.838) 0:07:52.304 ***
2025-09-17 00:30:43.159491 | orchestrator | changed: [testbed-manager]
2025-09-17 00:30:43.159501 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:30:43.159512 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:30:43.159522 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:30:43.159533 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:30:43.159543 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:30:43.159553 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:30:43.159564 | orchestrator |
2025-09-17 00:30:43.159574 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-09-17 00:30:43.159585 | orchestrator | Wednesday 17 September 2025 00:30:40 +0000 (0:00:01.261) 0:07:53.566 ***
2025-09-17 00:30:43.159596 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:30:43.159606 | orchestrator |
2025-09-17 00:30:43.159617 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-17 00:30:43.159628 | orchestrator | Wednesday 17 September 2025 00:30:41 +0000 (0:00:00.871) 0:07:54.437 ***
2025-09-17 00:30:43.159638 | orchestrator | ok: [testbed-manager]
2025-09-17 00:30:43.159649 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:30:43.159659 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:30:43.159669 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:30:43.159680 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:30:43.159699 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:30:43.159710 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:30:43.159720 | orchestrator |
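The PLAY RECAP a few lines below has a fixed `key=value` shape per host, which makes it easy to check a run programmatically. A small illustrative parser (not part of OSISM or Zuul) that flags hosts with failures or unreachable nodes:

```python
import re

# Matches lines like:
#   testbed-manager : ok=163  changed=38  unreachable=0 failed=0 ...
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>.*)$")


def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Parse one PLAY RECAP line into (host, {counter: value})."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {}
    for pair in m.group("counters").split():
        key, _, value = pair.partition("=")
        counters[key] = int(value)
    return m.group("host"), counters


def failed_hosts(lines: list[str]) -> list[str]:
    """Return hosts whose recap shows failed or unreachable tasks."""
    bad = []
    for line in lines:
        host, counters = parse_recap_line(line)
        if counters.get("failed", 0) or counters.get("unreachable", 0):
            bad.append(host)
    return bad
```

In the recap below every host reports failed=0 and unreachable=0, so a check like this would pass for the whole bootstrap play.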
2025-09-17 00:30:43.159731 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-17 00:30:43.159742 | orchestrator | Wednesday 17 September 2025 00:30:41 +0000 (0:00:00.773) 0:07:55.211 ***
2025-09-17 00:30:43.159752 | orchestrator | changed: [testbed-manager]
2025-09-17 00:30:43.159763 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:30:43.159773 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:30:43.159784 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:30:43.159831 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:30:43.159843 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:30:43.159854 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:30:43.159865 | orchestrator |
2025-09-17 00:30:43.159875 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:30:43.159887 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-09-17 00:30:43.159899 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-17 00:30:43.159915 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-17 00:30:43.159926 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-17 00:30:43.159937 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-17 00:30:43.159948 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-17 00:30:43.159959 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-17 00:30:43.159970 | orchestrator |
2025-09-17 00:30:43.159981 | orchestrator |
2025-09-17 00:30:43.159991 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:30:43.160002 | orchestrator | Wednesday 17 September 2025 00:30:43 +0000 (0:00:01.273) 0:07:56.484 ***
2025-09-17 00:30:43.160013 | orchestrator | ===============================================================================
2025-09-17 00:30:43.160023 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.09s
2025-09-17 00:30:43.160034 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.10s
2025-09-17 00:30:43.160045 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.14s
2025-09-17 00:30:43.160055 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.64s
2025-09-17 00:30:43.160066 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.49s
2025-09-17 00:30:43.160077 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.17s
2025-09-17 00:30:43.160088 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.99s
2025-09-17 00:30:43.160099 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.85s
2025-09-17 00:30:43.160109 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.82s
2025-09-17 00:30:43.160120 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.16s
2025-09-17 00:30:43.160137 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.20s
2025-09-17 00:30:43.608489 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.15s
2025-09-17 00:30:43.608581 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.11s
2025-09-17 00:30:43.608620 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.05s
2025-09-17 00:30:43.608632 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.64s
2025-09-17 00:30:43.608644 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.62s
2025-09-17 00:30:43.608655 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.58s
2025-09-17 00:30:43.608666 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.10s
2025-09-17 00:30:43.608736 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.02s
2025-09-17 00:30:43.608747 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.68s
2025-09-17 00:30:43.913757 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-17 00:30:43.913897 | orchestrator | + osism apply network
2025-09-17 00:30:56.628605 | orchestrator | 2025-09-17 00:30:56 | INFO  | Task 96a8c9d4-5f89-4b9d-9b4a-486cd42fbfcb (network) was prepared for execution.
2025-09-17 00:30:56.628730 | orchestrator | 2025-09-17 00:30:56 | INFO  | It takes a moment until task 96a8c9d4-5f89-4b9d-9b4a-486cd42fbfcb (network) has been started and output is visible here.
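The network role triggered by `osism apply network` goes on to render systemd-networkd netdev files for two unicast VXLAN overlays (vni 42 and 23, MTU 1350, per-node local and remote underlay IPs, as seen in the per-item output below). A hedged sketch of what a generated vxlan0 netdev for testbed-manager might resemble — the file name and exact layout OSISM emits are assumptions, though the section and key names are standard systemd.netdev syntax and the values come from the log:

```ini
; Hypothetical /etc/systemd/network/vxlan0.netdev -- illustrative only.
; Values taken from the testbed-manager item below:
; vni=42, mtu=1350, local_ip=192.168.16.5.
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
; The 'dests' list (the other nodes' underlay IPs) would typically be
; realized as all-zero-MAC FDB entries on the underlay interface's
; .network file, e.g.:
;   [BridgeFDB]
;   MACAddress=00:00:00:00:00:00
;   Destination=192.168.16.10
```

With one FDB entry per destination, broadcast/unknown-unicast traffic is head-end replicated to every peer, which is why each node's item lists all other nodes under dests.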
2025-09-17 00:31:23.739653 | orchestrator |
2025-09-17 00:31:23.739795 | orchestrator | PLAY [Apply role network] ******************************************************
2025-09-17 00:31:23.739881 | orchestrator |
2025-09-17 00:31:23.739904 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-09-17 00:31:23.739916 | orchestrator | Wednesday 17 September 2025 00:31:00 +0000 (0:00:00.238) 0:00:00.238 ***
2025-09-17 00:31:23.739927 | orchestrator | ok: [testbed-manager]
2025-09-17 00:31:23.739940 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:31:23.739951 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:31:23.739963 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:31:23.739974 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:31:23.739985 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:31:23.739995 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:31:23.740006 | orchestrator |
2025-09-17 00:31:23.740017 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-09-17 00:31:23.740028 | orchestrator | Wednesday 17 September 2025 00:31:00 +0000 (0:00:00.595) 0:00:00.834 ***
2025-09-17 00:31:23.740041 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:31:23.740055 | orchestrator |
2025-09-17 00:31:23.740066 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-09-17 00:31:23.740077 | orchestrator | Wednesday 17 September 2025 00:31:02 +0000 (0:00:01.055) 0:00:01.889 ***
2025-09-17 00:31:23.740087 | orchestrator | ok: [testbed-manager]
2025-09-17 00:31:23.740098 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:31:23.740109 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:31:23.740120 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:31:23.740130 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:31:23.740141 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:31:23.740154 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:31:23.740166 | orchestrator |
2025-09-17 00:31:23.740179 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-09-17 00:31:23.740191 | orchestrator | Wednesday 17 September 2025 00:31:04 +0000 (0:00:02.224) 0:00:04.114 ***
2025-09-17 00:31:23.740203 | orchestrator | ok: [testbed-manager]
2025-09-17 00:31:23.740214 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:31:23.740227 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:31:23.740238 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:31:23.740250 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:31:23.740262 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:31:23.740274 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:31:23.740286 | orchestrator |
2025-09-17 00:31:23.740298 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-09-17 00:31:23.740337 | orchestrator | Wednesday 17 September 2025 00:31:05 +0000 (0:00:01.614) 0:00:05.729 ***
2025-09-17 00:31:23.740351 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-09-17 00:31:23.740363 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-09-17 00:31:23.740376 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-09-17 00:31:23.740388 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-09-17 00:31:23.740400 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-09-17 00:31:23.740412 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-09-17 00:31:23.740424 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-09-17 00:31:23.740437 | orchestrator |
2025-09-17 00:31:23.740449 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-09-17 00:31:23.740461 | orchestrator | Wednesday 17 September 2025 00:31:06 +0000 (0:00:00.893) 0:00:06.622 ***
2025-09-17 00:31:23.740474 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 00:31:23.740487 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-17 00:31:23.740499 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-17 00:31:23.740510 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-17 00:31:23.740520 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-17 00:31:23.740530 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-17 00:31:23.740541 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-17 00:31:23.740551 | orchestrator |
2025-09-17 00:31:23.740562 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-09-17 00:31:23.740573 | orchestrator | Wednesday 17 September 2025 00:31:09 +0000 (0:00:02.939) 0:00:09.562 ***
2025-09-17 00:31:23.740583 | orchestrator | changed: [testbed-manager]
2025-09-17 00:31:23.740595 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:31:23.740605 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:31:23.740615 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:31:23.740626 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:31:23.740636 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:31:23.740647 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:31:23.740657 | orchestrator |
2025-09-17 00:31:23.740668 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-09-17 00:31:23.740679 | orchestrator | Wednesday 17 September 2025 00:31:11 +0000 (0:00:01.449) 0:00:11.011 ***
2025-09-17 00:31:23.740689 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 00:31:23.740700 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-17 00:31:23.740710 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-17 00:31:23.740721 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-17 00:31:23.740731 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-17 00:31:23.740742 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-17 00:31:23.740752 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-17 00:31:23.740763 | orchestrator |
2025-09-17 00:31:23.740773 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-09-17 00:31:23.740784 | orchestrator | Wednesday 17 September 2025 00:31:13 +0000 (0:00:01.928) 0:00:12.940 ***
2025-09-17 00:31:23.740794 | orchestrator | ok: [testbed-manager]
2025-09-17 00:31:23.740826 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:31:23.740837 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:31:23.740848 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:31:23.740858 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:31:23.740868 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:31:23.740879 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:31:23.740889 | orchestrator |
2025-09-17 00:31:23.740900 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-09-17 00:31:23.740928 | orchestrator | Wednesday 17 September 2025 00:31:14 +0000 (0:00:01.048) 0:00:13.988 ***
2025-09-17 00:31:23.740940 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:31:23.740951 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:31:23.740962 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:31:23.740981 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:31:23.740992 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:31:23.741003 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:31:23.741013 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:31:23.741024 | orchestrator |
2025-09-17 00:31:23.741035 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-09-17 00:31:23.741045 | orchestrator | Wednesday 17 September 2025 00:31:14 +0000 (0:00:00.640) 0:00:14.629 ***
2025-09-17 00:31:23.741056 | orchestrator | ok: [testbed-manager]
2025-09-17 00:31:23.741066 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:31:23.741077 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:31:23.741087 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:31:23.741098 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:31:23.741108 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:31:23.741119 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:31:23.741129 | orchestrator |
2025-09-17 00:31:23.741140 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-09-17 00:31:23.741151 | orchestrator | Wednesday 17 September 2025 00:31:16 +0000 (0:00:02.167) 0:00:16.797 ***
2025-09-17 00:31:23.741161 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:31:23.741172 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:31:23.741182 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:31:23.741193 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:31:23.741203 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:31:23.741214 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:31:23.741244 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-09-17 00:31:23.741256 | orchestrator |
2025-09-17 00:31:23.741267 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-09-17 00:31:23.741278 | orchestrator | Wednesday 17 September 2025 00:31:17 +0000 (0:00:00.868) 0:00:17.666 ***
2025-09-17 00:31:23.741288 | orchestrator | ok: [testbed-manager]
2025-09-17 00:31:23.741299 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:31:23.741309 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:31:23.741319 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:31:23.741330 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:31:23.741340 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:31:23.741351 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:31:23.741361 | orchestrator |
2025-09-17 00:31:23.741372 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-09-17 00:31:23.741382 | orchestrator | Wednesday 17 September 2025 00:31:19 +0000 (0:00:01.668) 0:00:19.334 ***
2025-09-17 00:31:23.741393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:31:23.741406 | orchestrator |
2025-09-17 00:31:23.741417 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-17 00:31:23.741428 | orchestrator | Wednesday 17 September 2025 00:31:20 +0000 (0:00:01.229) 0:00:20.564 ***
2025-09-17 00:31:23.741438 | orchestrator | ok: [testbed-manager]
2025-09-17 00:31:23.741449 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:31:23.741459 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:31:23.741470 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:31:23.741480 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:31:23.741491 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:31:23.741501 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:31:23.741512 | orchestrator |
2025-09-17 00:31:23.741522 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-09-17 00:31:23.741533 | orchestrator | Wednesday 17 September 2025 00:31:21 +0000 (0:00:00.992) 0:00:21.556 ***
2025-09-17 00:31:23.741544 | orchestrator | ok: [testbed-manager]
2025-09-17 00:31:23.741554 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:31:23.741565 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:31:23.741582 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:31:23.741593 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:31:23.741603 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:31:23.741614 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:31:23.741624 | orchestrator |
2025-09-17 00:31:23.741635 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-17 00:31:23.741646 | orchestrator | Wednesday 17 September 2025 00:31:22 +0000 (0:00:00.810) 0:00:22.367 ***
2025-09-17 00:31:23.741656 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 00:31:23.741667 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 00:31:23.741678 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 00:31:23.741688 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 00:31:23.741699 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 00:31:23.741709 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 00:31:23.741720 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 00:31:23.741730 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 00:31:23.741741 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 00:31:23.741751 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 00:31:23.741762 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 00:31:23.741772 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 00:31:23.741783 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 00:31:23.741794 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 00:31:23.741832 | orchestrator |
2025-09-17 00:31:23.741852 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-17 00:31:39.970850 | orchestrator | Wednesday 17 September 2025 00:31:23 +0000 (0:00:01.199) 0:00:23.567 ***
2025-09-17 00:31:39.970961 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:31:39.970978 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:31:39.970989 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:31:39.971000 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:31:39.971011 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:31:39.971022 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:31:39.971034 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:31:39.971046 | orchestrator |
2025-09-17 00:31:39.971058 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-17 00:31:39.971070 | orchestrator | Wednesday 17 September 2025 00:31:24 +0000 (0:00:00.644) 0:00:24.212 ***
2025-09-17 00:31:39.971082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-5, testbed-node-0, testbed-node-4, testbed-node-2, testbed-node-3
2025-09-17 00:31:39.971096 | orchestrator |
2025-09-17 00:31:39.971106 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-17 00:31:39.971117 | orchestrator | Wednesday 17 September 2025 00:31:28 +0000 (0:00:04.624) 0:00:28.836 ***
2025-09-17 00:31:39.971146 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971205 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-17 00:31:39.971217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971228 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971239 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-17 00:31:39.971280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-17 00:31:39.971291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-17 00:31:39.971318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-17 00:31:39.971330 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-17 00:31:39.971341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-17 00:31:39.971352 | orchestrator |
2025-09-17 00:31:39.971365 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-09-17 00:31:39.971378 | orchestrator | Wednesday 17 September 2025 00:31:34 +0000 (0:00:05.834) 0:00:34.670 ***
2025-09-17 00:31:39.971391 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971417 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971456 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-17 00:31:39.971468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13',
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-17 00:31:39.971481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-17 00:31:39.971494 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-17 00:31:39.971506 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-17 00:31:39.971519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-17 00:31:39.971532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-17 00:31:39.971544 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-17 00:31:39.971571 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-17 00:31:45.607624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-17 00:31:45.607735 | orchestrator | 2025-09-17 00:31:45.607752 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-17 00:31:45.607766 | orchestrator | Wednesday 17 September 2025 00:31:39 +0000 (0:00:05.132) 0:00:39.802 *** 2025-09-17 00:31:45.607851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:31:45.607866 | orchestrator | 2025-09-17 00:31:45.607878 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-17 00:31:45.607889 | orchestrator | Wednesday 17 September 2025 00:31:41 +0000 (0:00:01.080) 0:00:40.883 *** 2025-09-17 00:31:45.607899 | orchestrator | ok: [testbed-manager] 2025-09-17 00:31:45.607911 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:31:45.607922 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:31:45.607932 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:31:45.607943 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:31:45.607953 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:31:45.607964 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:31:45.607975 | orchestrator | 2025-09-17 00:31:45.607985 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-09-17 00:31:45.607996 | orchestrator | Wednesday 17 September 2025 00:31:42 +0000 (0:00:00.995) 0:00:41.879 *** 2025-09-17 00:31:45.608007 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-17 00:31:45.608018 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-17 00:31:45.608029 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-17 00:31:45.608039 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-17 00:31:45.608050 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-17 00:31:45.608077 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-17 00:31:45.608089 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-17 00:31:45.608100 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-17 00:31:45.608110 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:31:45.608122 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-17 00:31:45.608132 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-17 00:31:45.608142 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-17 00:31:45.608153 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-17 00:31:45.608163 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:31:45.608174 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-17 00:31:45.608184 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2025-09-17 00:31:45.608195 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-17 00:31:45.608205 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-17 00:31:45.608216 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:31:45.608226 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-17 00:31:45.608237 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-17 00:31:45.608247 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-17 00:31:45.608258 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-17 00:31:45.608268 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:31:45.608279 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-17 00:31:45.608289 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-17 00:31:45.608299 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-17 00:31:45.608318 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-17 00:31:45.608330 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:31:45.608340 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:31:45.608351 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-17 00:31:45.608361 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-17 00:31:45.608372 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-17 00:31:45.608382 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-17 00:31:45.608393 | 
orchestrator | skipping: [testbed-node-5] 2025-09-17 00:31:45.608403 | orchestrator | 2025-09-17 00:31:45.608414 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-17 00:31:45.608442 | orchestrator | Wednesday 17 September 2025 00:31:44 +0000 (0:00:01.974) 0:00:43.854 *** 2025-09-17 00:31:45.608454 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:31:45.608465 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:31:45.608476 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:31:45.608486 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:31:45.608497 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:31:45.608507 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:31:45.608518 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:31:45.608528 | orchestrator | 2025-09-17 00:31:45.608539 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-17 00:31:45.608550 | orchestrator | Wednesday 17 September 2025 00:31:44 +0000 (0:00:00.610) 0:00:44.464 *** 2025-09-17 00:31:45.608561 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:31:45.608571 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:31:45.608582 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:31:45.608592 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:31:45.608603 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:31:45.608613 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:31:45.608623 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:31:45.608634 | orchestrator | 2025-09-17 00:31:45.608645 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:31:45.608662 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 00:31:45.608674 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-17 00:31:45.608685 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-17 00:31:45.608696 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-17 00:31:45.608706 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-17 00:31:45.608717 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-17 00:31:45.608728 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-17 00:31:45.608738 | orchestrator | 2025-09-17 00:31:45.608749 | orchestrator | 2025-09-17 00:31:45.608760 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:31:45.608770 | orchestrator | Wednesday 17 September 2025 00:31:45 +0000 (0:00:00.672) 0:00:45.137 *** 2025-09-17 00:31:45.608781 | orchestrator | =============================================================================== 2025-09-17 00:31:45.608797 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.83s 2025-09-17 00:31:45.608829 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.13s 2025-09-17 00:31:45.608841 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.62s 2025-09-17 00:31:45.608851 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.94s 2025-09-17 00:31:45.608862 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.22s 2025-09-17 00:31:45.608872 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.17s 2025-09-17 00:31:45.608883 | orchestrator | osism.commons.network : Remove unused 
configuration files --------------- 1.97s 2025-09-17 00:31:45.608893 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.93s 2025-09-17 00:31:45.608904 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.67s 2025-09-17 00:31:45.608915 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.61s 2025-09-17 00:31:45.608925 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.45s 2025-09-17 00:31:45.608935 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.23s 2025-09-17 00:31:45.608946 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.20s 2025-09-17 00:31:45.608957 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.08s 2025-09-17 00:31:45.608967 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.06s 2025-09-17 00:31:45.608978 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.05s 2025-09-17 00:31:45.608989 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s 2025-09-17 00:31:45.608999 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s 2025-09-17 00:31:45.609010 | orchestrator | osism.commons.network : Create required directories --------------------- 0.89s 2025-09-17 00:31:45.609020 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.87s 2025-09-17 00:31:45.849347 | orchestrator | + osism apply wireguard 2025-09-17 00:31:57.823394 | orchestrator | 2025-09-17 00:31:57 | INFO  | Task b079bea2-66d4-41c1-88e9-3f3e131f946b (wireguard) was prepared for execution. 
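An aside on the "Create systemd networkd netdev files" task above: from the item parameters it logs (vni=42, mtu=1350, local_ip=192.168.16.5, address 192.168.112.5/20 on testbed-manager), the generated files would look roughly like the sketch below. This is a reconstruction for illustration only; the exact template output of the osism.commons.network role may differ, and the file paths are taken from the cleanup task's skipped items.

```ini
# /etc/systemd/network/30-vxlan0.netdev (illustrative sketch, testbed-manager)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

# /etc/systemd/network/30-vxlan0.network (illustrative sketch)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

Since no multicast group is logged, the 'dests' list (192.168.16.10 through .15) would typically be installed as static unicast FDB entries for head-end replication, one per peer; how the role does this exactly is not visible in the log.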
2025-09-17 00:31:57.823508 | orchestrator | 2025-09-17 00:31:57 | INFO  | It takes a moment until task b079bea2-66d4-41c1-88e9-3f3e131f946b (wireguard) has been started and output is visible here. 2025-09-17 00:32:16.842246 | orchestrator | 2025-09-17 00:32:16.842366 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-17 00:32:16.842381 | orchestrator | 2025-09-17 00:32:16.842392 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-17 00:32:16.842403 | orchestrator | Wednesday 17 September 2025 00:32:01 +0000 (0:00:00.222) 0:00:00.222 *** 2025-09-17 00:32:16.842413 | orchestrator | ok: [testbed-manager] 2025-09-17 00:32:16.842424 | orchestrator | 2025-09-17 00:32:16.842434 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-17 00:32:16.842444 | orchestrator | Wednesday 17 September 2025 00:32:03 +0000 (0:00:01.532) 0:00:01.754 *** 2025-09-17 00:32:16.842453 | orchestrator | changed: [testbed-manager] 2025-09-17 00:32:16.842463 | orchestrator | 2025-09-17 00:32:16.842473 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-17 00:32:16.842482 | orchestrator | Wednesday 17 September 2025 00:32:09 +0000 (0:00:06.048) 0:00:07.803 *** 2025-09-17 00:32:16.842492 | orchestrator | changed: [testbed-manager] 2025-09-17 00:32:16.842501 | orchestrator | 2025-09-17 00:32:16.842511 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-17 00:32:16.842521 | orchestrator | Wednesday 17 September 2025 00:32:09 +0000 (0:00:00.539) 0:00:08.343 *** 2025-09-17 00:32:16.842546 | orchestrator | changed: [testbed-manager] 2025-09-17 00:32:16.842577 | orchestrator | 2025-09-17 00:32:16.842587 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-17 00:32:16.842598 | orchestrator 
| Wednesday 17 September 2025 00:32:10 +0000 (0:00:00.402) 0:00:08.746 *** 2025-09-17 00:32:16.842607 | orchestrator | ok: [testbed-manager] 2025-09-17 00:32:16.842617 | orchestrator | 2025-09-17 00:32:16.842626 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-17 00:32:16.842636 | orchestrator | Wednesday 17 September 2025 00:32:10 +0000 (0:00:00.512) 0:00:09.258 *** 2025-09-17 00:32:16.842645 | orchestrator | ok: [testbed-manager] 2025-09-17 00:32:16.842655 | orchestrator | 2025-09-17 00:32:16.842664 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-17 00:32:16.842674 | orchestrator | Wednesday 17 September 2025 00:32:11 +0000 (0:00:00.513) 0:00:09.771 *** 2025-09-17 00:32:16.842683 | orchestrator | ok: [testbed-manager] 2025-09-17 00:32:16.842692 | orchestrator | 2025-09-17 00:32:16.842702 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-17 00:32:16.842711 | orchestrator | Wednesday 17 September 2025 00:32:11 +0000 (0:00:00.404) 0:00:10.175 *** 2025-09-17 00:32:16.842721 | orchestrator | changed: [testbed-manager] 2025-09-17 00:32:16.842730 | orchestrator | 2025-09-17 00:32:16.842739 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-17 00:32:16.842749 | orchestrator | Wednesday 17 September 2025 00:32:12 +0000 (0:00:01.198) 0:00:11.373 *** 2025-09-17 00:32:16.842758 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-17 00:32:16.842768 | orchestrator | changed: [testbed-manager] 2025-09-17 00:32:16.842777 | orchestrator | 2025-09-17 00:32:16.842788 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-17 00:32:16.842800 | orchestrator | Wednesday 17 September 2025 00:32:13 +0000 (0:00:00.924) 0:00:12.298 *** 2025-09-17 00:32:16.842835 | orchestrator | changed: 
[testbed-manager] 2025-09-17 00:32:16.842846 | orchestrator | 2025-09-17 00:32:16.842856 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-17 00:32:16.842867 | orchestrator | Wednesday 17 September 2025 00:32:15 +0000 (0:00:01.678) 0:00:13.977 *** 2025-09-17 00:32:16.842878 | orchestrator | changed: [testbed-manager] 2025-09-17 00:32:16.842889 | orchestrator | 2025-09-17 00:32:16.842900 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:32:16.842911 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:32:16.842923 | orchestrator | 2025-09-17 00:32:16.842933 | orchestrator | 2025-09-17 00:32:16.842945 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:32:16.842956 | orchestrator | Wednesday 17 September 2025 00:32:16 +0000 (0:00:00.926) 0:00:14.903 *** 2025-09-17 00:32:16.842966 | orchestrator | =============================================================================== 2025-09-17 00:32:16.842977 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.05s 2025-09-17 00:32:16.842987 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.68s 2025-09-17 00:32:16.842998 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.53s 2025-09-17 00:32:16.843009 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s 2025-09-17 00:32:16.843020 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s 2025-09-17 00:32:16.843031 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s 2025-09-17 00:32:16.843042 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 
2025-09-17 00:32:16.843052 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.51s 2025-09-17 00:32:16.843063 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s 2025-09-17 00:32:16.843074 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2025-09-17 00:32:16.843093 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s 2025-09-17 00:32:17.122319 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-17 00:32:17.159448 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-17 00:32:17.159488 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-17 00:32:17.244963 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 175 0 --:--:-- --:--:-- --:--:-- 176 2025-09-17 00:32:17.253402 | orchestrator | + osism apply --environment custom workarounds 2025-09-17 00:32:19.113721 | orchestrator | 2025-09-17 00:32:19 | INFO  | Trying to run play workarounds in environment custom 2025-09-17 00:32:29.298763 | orchestrator | 2025-09-17 00:32:29 | INFO  | Task 75de2a65-c963-4b5e-92aa-25ec8cd5a8e5 (workarounds) was prepared for execution. 2025-09-17 00:32:29.298935 | orchestrator | 2025-09-17 00:32:29 | INFO  | It takes a moment until task 75de2a65-c963-4b5e-92aa-25ec8cd5a8e5 (workarounds) has been started and output is visible here. 
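The wireguard play above creates a server keypair and a preshared key, renders wg0.conf, and manages wg-quick@wg0.service. A server-side wg0.conf consumed by wg-quick has roughly the following shape; this is a generic sketch, not the file from this run: the address, port, and AllowedIPs are assumptions, and the key values are placeholders.

```ini
# /etc/wireguard/wg0.conf (illustrative sketch; keys are placeholders)
[Interface]
Address = 192.168.48.1/24        # assumed tunnel address
ListenPort = 51820               # wireguard default port
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32     # assumed client tunnel address
```

The "Copy client configuration files" task then renders the mirror-image [Peer] config for clients, which prepare-wireguard-configuration.sh fetches afterwards.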
2025-09-17 00:32:53.667850 | orchestrator | 2025-09-17 00:32:53.667958 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 00:32:53.667972 | orchestrator | 2025-09-17 00:32:53.667981 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-17 00:32:53.667991 | orchestrator | Wednesday 17 September 2025 00:32:32 +0000 (0:00:00.145) 0:00:00.145 *** 2025-09-17 00:32:53.668000 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-17 00:32:53.668009 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-17 00:32:53.668025 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-17 00:32:53.668034 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-17 00:32:53.668043 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-17 00:32:53.668051 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-17 00:32:53.668060 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-17 00:32:53.668069 | orchestrator | 2025-09-17 00:32:53.668077 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-17 00:32:53.668086 | orchestrator | 2025-09-17 00:32:53.668094 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-17 00:32:53.668103 | orchestrator | Wednesday 17 September 2025 00:32:33 +0000 (0:00:00.771) 0:00:00.916 *** 2025-09-17 00:32:53.668112 | orchestrator | ok: [testbed-manager] 2025-09-17 00:32:53.668122 | orchestrator | 2025-09-17 00:32:53.668130 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-17 00:32:53.668139 | orchestrator | 2025-09-17 00:32:53.668147 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-09-17 00:32:53.668156 | orchestrator | Wednesday 17 September 2025 00:32:35 +0000 (0:00:02.141) 0:00:03.057 *** 2025-09-17 00:32:53.668165 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:32:53.668173 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:32:53.668182 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:32:53.668190 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:32:53.668199 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:32:53.668207 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:32:53.668216 | orchestrator | 2025-09-17 00:32:53.668226 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-17 00:32:53.668234 | orchestrator | 2025-09-17 00:32:53.668243 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-17 00:32:53.668251 | orchestrator | Wednesday 17 September 2025 00:32:37 +0000 (0:00:01.898) 0:00:04.956 *** 2025-09-17 00:32:53.668261 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 00:32:53.668270 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 00:32:53.668295 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 00:32:53.668304 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 00:32:53.668312 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 00:32:53.668321 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 00:32:53.668330 | orchestrator | 2025-09-17 00:32:53.668338 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-09-17 00:32:53.668347 | orchestrator | Wednesday 17 September 2025 00:32:39 +0000 (0:00:01.467) 0:00:06.424 *** 2025-09-17 00:32:53.668357 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:32:53.668368 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:32:53.668378 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:32:53.668387 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:32:53.668397 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:32:53.668407 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:32:53.668416 | orchestrator | 2025-09-17 00:32:53.668426 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-17 00:32:53.668436 | orchestrator | Wednesday 17 September 2025 00:32:43 +0000 (0:00:03.823) 0:00:10.248 *** 2025-09-17 00:32:53.668446 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:32:53.668456 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:32:53.668466 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:32:53.668476 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:32:53.668486 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:32:53.668495 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:32:53.668505 | orchestrator | 2025-09-17 00:32:53.668515 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-17 00:32:53.668525 | orchestrator | 2025-09-17 00:32:53.668535 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-17 00:32:53.668545 | orchestrator | Wednesday 17 September 2025 00:32:43 +0000 (0:00:00.667) 0:00:10.916 *** 2025-09-17 00:32:53.668554 | orchestrator | changed: [testbed-manager] 2025-09-17 00:32:53.668564 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:32:53.668574 | orchestrator | changed: [testbed-node-1] 2025-09-17 
00:32:53.668583 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:32:53.668593 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:32:53.668603 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:32:53.668613 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:32:53.668623 | orchestrator | 2025-09-17 00:32:53.668633 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-17 00:32:53.668643 | orchestrator | Wednesday 17 September 2025 00:32:45 +0000 (0:00:01.794) 0:00:12.710 *** 2025-09-17 00:32:53.668653 | orchestrator | changed: [testbed-manager] 2025-09-17 00:32:53.668663 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:32:53.668672 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:32:53.668682 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:32:53.668692 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:32:53.668702 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:32:53.668725 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:32:53.668734 | orchestrator | 2025-09-17 00:32:53.668743 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-17 00:32:53.668752 | orchestrator | Wednesday 17 September 2025 00:32:47 +0000 (0:00:01.611) 0:00:14.321 *** 2025-09-17 00:32:53.668761 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:32:53.668769 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:32:53.668778 | orchestrator | ok: [testbed-manager] 2025-09-17 00:32:53.668786 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:32:53.668795 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:32:53.668809 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:32:53.668835 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:32:53.668844 | orchestrator | 2025-09-17 00:32:53.668856 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-17 00:32:53.668865 | orchestrator 
| Wednesday 17 September 2025 00:32:48 +0000 (0:00:01.477) 0:00:15.798 *** 2025-09-17 00:32:53.668874 | orchestrator | changed: [testbed-manager] 2025-09-17 00:32:53.668882 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:32:53.668890 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:32:53.668899 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:32:53.668907 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:32:53.668915 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:32:53.668924 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:32:53.668932 | orchestrator | 2025-09-17 00:32:53.668940 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-17 00:32:53.668949 | orchestrator | Wednesday 17 September 2025 00:32:50 +0000 (0:00:01.721) 0:00:17.520 *** 2025-09-17 00:32:53.668957 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:32:53.668966 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:32:53.668974 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:32:53.668982 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:32:53.668991 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:32:53.668999 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:32:53.669007 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:32:53.669016 | orchestrator | 2025-09-17 00:32:53.669024 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-17 00:32:53.669033 | orchestrator | 2025-09-17 00:32:53.669041 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-17 00:32:53.669050 | orchestrator | Wednesday 17 September 2025 00:32:50 +0000 (0:00:00.617) 0:00:18.138 *** 2025-09-17 00:32:53.669058 | orchestrator | ok: [testbed-manager] 2025-09-17 00:32:53.669066 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:32:53.669075 | orchestrator | ok: 
[testbed-node-0] 2025-09-17 00:32:53.669083 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:32:53.669092 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:32:53.669100 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:32:53.669108 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:32:53.669117 | orchestrator | 2025-09-17 00:32:53.669125 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:32:53.669134 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:32:53.669144 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:32:53.669153 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:32:53.669161 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:32:53.669170 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:32:53.669178 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:32:53.669187 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:32:53.669195 | orchestrator | 2025-09-17 00:32:53.669204 | orchestrator | 2025-09-17 00:32:53.669212 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:32:53.669221 | orchestrator | Wednesday 17 September 2025 00:32:53 +0000 (0:00:02.706) 0:00:20.844 *** 2025-09-17 00:32:53.669234 | orchestrator | =============================================================================== 2025-09-17 00:32:53.669243 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.82s 2025-09-17 00:32:53.669251 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.71s 2025-09-17 00:32:53.669259 | orchestrator | Apply netplan configuration --------------------------------------------- 2.14s 2025-09-17 00:32:53.669268 | orchestrator | Apply netplan configuration --------------------------------------------- 1.90s 2025-09-17 00:32:53.669276 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.79s 2025-09-17 00:32:53.669285 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.72s 2025-09-17 00:32:53.669293 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.61s 2025-09-17 00:32:53.669301 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.48s 2025-09-17 00:32:53.669310 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.47s 2025-09-17 00:32:53.669318 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s 2025-09-17 00:32:53.669327 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.67s 2025-09-17 00:32:53.669340 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s 2025-09-17 00:32:54.233334 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-17 00:33:06.253656 | orchestrator | 2025-09-17 00:33:06 | INFO  | Task c064156d-952c-4fab-b3e3-003d5db283da (reboot) was prepared for execution. 2025-09-17 00:33:06.253808 | orchestrator | 2025-09-17 00:33:06 | INFO  | It takes a moment until task c064156d-952c-4fab-b3e3-003d5db283da (reboot) has been started and output is visible here. 
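The `osism apply reboot -l testbed-nodes -e ireallymeanit=yes` call above is gated by a confirmation variable: the play's "Exit playbook, if user did not mean to reboot systems" task only lets execution continue because `ireallymeanit=yes` was passed. A minimal sketch of the same guard pattern in shell — the helper name is hypothetical, not part of the testbed scripts:

```shell
# Hypothetical helper (not from the testbed repo): abort unless the caller
# explicitly confirmed, mirroring the -e ireallymeanit=yes gate in the play.
confirm_reboot() {
    if [[ "${1:-}" != "yes" ]]; then
        echo "Pass ireallymeanit=yes to really reboot the systems." >&2
        return 1
    fi
    return 0
}
```

The gate makes a destructive bulk action opt-in: forgetting the extra variable produces a clean no-op instead of a fleet-wide reboot.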
2025-09-17 00:33:16.097880 | orchestrator | 2025-09-17 00:33:16.098006 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 00:33:16.098090 | orchestrator | 2025-09-17 00:33:16.098103 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-17 00:33:16.098115 | orchestrator | Wednesday 17 September 2025 00:33:10 +0000 (0:00:00.203) 0:00:00.203 *** 2025-09-17 00:33:16.098127 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:33:16.098139 | orchestrator | 2025-09-17 00:33:16.098150 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 00:33:16.098161 | orchestrator | Wednesday 17 September 2025 00:33:10 +0000 (0:00:00.102) 0:00:00.305 *** 2025-09-17 00:33:16.098172 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:33:16.098183 | orchestrator | 2025-09-17 00:33:16.098193 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-17 00:33:16.098204 | orchestrator | Wednesday 17 September 2025 00:33:11 +0000 (0:00:00.922) 0:00:01.228 *** 2025-09-17 00:33:16.098215 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:33:16.098226 | orchestrator | 2025-09-17 00:33:16.098237 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 00:33:16.098248 | orchestrator | 2025-09-17 00:33:16.098259 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-17 00:33:16.098270 | orchestrator | Wednesday 17 September 2025 00:33:11 +0000 (0:00:00.120) 0:00:01.348 *** 2025-09-17 00:33:16.098281 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:33:16.098291 | orchestrator | 2025-09-17 00:33:16.098302 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 00:33:16.098313 | orchestrator | Wednesday 17 
September 2025 00:33:11 +0000 (0:00:00.113) 0:00:01.461 *** 2025-09-17 00:33:16.098323 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:33:16.098334 | orchestrator | 2025-09-17 00:33:16.098345 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-17 00:33:16.098356 | orchestrator | Wednesday 17 September 2025 00:33:12 +0000 (0:00:00.673) 0:00:02.134 *** 2025-09-17 00:33:16.098370 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:33:16.098382 | orchestrator | 2025-09-17 00:33:16.098415 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 00:33:16.098428 | orchestrator | 2025-09-17 00:33:16.098441 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-17 00:33:16.098454 | orchestrator | Wednesday 17 September 2025 00:33:12 +0000 (0:00:00.109) 0:00:02.244 *** 2025-09-17 00:33:16.098465 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:33:16.098478 | orchestrator | 2025-09-17 00:33:16.098490 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 00:33:16.098503 | orchestrator | Wednesday 17 September 2025 00:33:12 +0000 (0:00:00.208) 0:00:02.453 *** 2025-09-17 00:33:16.098515 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:33:16.098527 | orchestrator | 2025-09-17 00:33:16.098539 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-17 00:33:16.098551 | orchestrator | Wednesday 17 September 2025 00:33:13 +0000 (0:00:00.659) 0:00:03.112 *** 2025-09-17 00:33:16.098564 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:33:16.098576 | orchestrator | 2025-09-17 00:33:16.098589 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 00:33:16.098602 | orchestrator | 2025-09-17 00:33:16.098614 | orchestrator | TASK [Exit 
playbook, if user did not mean to reboot systems] ******************* 2025-09-17 00:33:16.098626 | orchestrator | Wednesday 17 September 2025 00:33:13 +0000 (0:00:00.133) 0:00:03.245 *** 2025-09-17 00:33:16.098639 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:33:16.098652 | orchestrator | 2025-09-17 00:33:16.098664 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 00:33:16.098677 | orchestrator | Wednesday 17 September 2025 00:33:13 +0000 (0:00:00.099) 0:00:03.344 *** 2025-09-17 00:33:16.098690 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:33:16.098702 | orchestrator | 2025-09-17 00:33:16.098715 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-17 00:33:16.098727 | orchestrator | Wednesday 17 September 2025 00:33:13 +0000 (0:00:00.669) 0:00:04.014 *** 2025-09-17 00:33:16.098740 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:33:16.098752 | orchestrator | 2025-09-17 00:33:16.098763 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 00:33:16.098774 | orchestrator | 2025-09-17 00:33:16.098784 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-17 00:33:16.098795 | orchestrator | Wednesday 17 September 2025 00:33:14 +0000 (0:00:00.126) 0:00:04.141 *** 2025-09-17 00:33:16.098805 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:33:16.098834 | orchestrator | 2025-09-17 00:33:16.098846 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 00:33:16.098857 | orchestrator | Wednesday 17 September 2025 00:33:14 +0000 (0:00:00.105) 0:00:04.246 *** 2025-09-17 00:33:16.098867 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:33:16.098878 | orchestrator | 2025-09-17 00:33:16.098888 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-09-17 00:33:16.098899 | orchestrator | Wednesday 17 September 2025 00:33:14 +0000 (0:00:00.660) 0:00:04.907 *** 2025-09-17 00:33:16.098910 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:33:16.098920 | orchestrator | 2025-09-17 00:33:16.098931 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 00:33:16.098942 | orchestrator | 2025-09-17 00:33:16.098952 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-17 00:33:16.098963 | orchestrator | Wednesday 17 September 2025 00:33:14 +0000 (0:00:00.109) 0:00:05.016 *** 2025-09-17 00:33:16.098974 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:33:16.098984 | orchestrator | 2025-09-17 00:33:16.098995 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 00:33:16.099005 | orchestrator | Wednesday 17 September 2025 00:33:15 +0000 (0:00:00.098) 0:00:05.114 *** 2025-09-17 00:33:16.099016 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:33:16.099026 | orchestrator | 2025-09-17 00:33:16.099037 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-17 00:33:16.099056 | orchestrator | Wednesday 17 September 2025 00:33:15 +0000 (0:00:00.665) 0:00:05.780 *** 2025-09-17 00:33:16.099085 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:33:16.099097 | orchestrator | 2025-09-17 00:33:16.099108 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:33:16.099120 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:33:16.099133 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:33:16.099144 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2025-09-17 00:33:16.099155 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:33:16.099165 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:33:16.099176 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:33:16.099186 | orchestrator | 2025-09-17 00:33:16.099197 | orchestrator | 2025-09-17 00:33:16.099208 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:33:16.099219 | orchestrator | Wednesday 17 September 2025 00:33:15 +0000 (0:00:00.033) 0:00:05.814 *** 2025-09-17 00:33:16.099230 | orchestrator | =============================================================================== 2025-09-17 00:33:16.099240 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.25s 2025-09-17 00:33:16.099256 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.73s 2025-09-17 00:33:16.099267 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2025-09-17 00:33:16.362256 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-17 00:33:28.419317 | orchestrator | 2025-09-17 00:33:28 | INFO  | Task c39e7fbe-b008-4c27-9e43-f9f1a736ef2b (wait-for-connection) was prepared for execution. 2025-09-17 00:33:28.419457 | orchestrator | 2025-09-17 00:33:28 | INFO  | It takes a moment until task c39e7fbe-b008-4c27-9e43-f9f1a736ef2b (wait-for-connection) has been started and output is visible here. 
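The reboot play deliberately does not wait for the nodes to come back ("Reboot system - do not wait for the reboot to complete"); the separate `osism apply wait-for-connection` step then polls each node until it is reachable again. The same idea can be sketched in shell as a TCP poll against sshd — the helper name, port, and retry parameters are assumptions, not taken from the playbook:

```shell
# Hypothetical sketch: poll until a TCP connection to host:port succeeds,
# roughly what the wait-for-connection play waits for after a reboot.
wait_for_ssh() {
    local host=$1 port=${2:-22} max_attempts=${3:-120} interval=${4:-5}
    local attempt=1
    # /dev/tcp is a bash pseudo-device; the subshell exits nonzero while
    # the connection is refused or times out.
    until (echo >"/dev/tcp/${host}/${port}") 2>/dev/null; do
        if (( attempt++ >= max_attempts )); then
            echo "${host}:${port} still unreachable, giving up" >&2
            return 1
        fi
        sleep "${interval}"
    done
}
```

Splitting "trigger reboot" and "wait for reachability" into two tasks lets all nodes reboot in parallel instead of serializing on each node's boot time.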
2025-09-17 00:33:44.187271 | orchestrator | 2025-09-17 00:33:44.187393 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-17 00:33:44.187410 | orchestrator | 2025-09-17 00:33:44.187422 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-17 00:33:44.187434 | orchestrator | Wednesday 17 September 2025 00:33:32 +0000 (0:00:00.179) 0:00:00.179 *** 2025-09-17 00:33:44.187445 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:33:44.187457 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:33:44.187469 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:33:44.187480 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:33:44.187491 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:33:44.187503 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:33:44.187513 | orchestrator | 2025-09-17 00:33:44.187525 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:33:44.187537 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:33:44.187551 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:33:44.187562 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:33:44.187602 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:33:44.187644 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:33:44.187664 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:33:44.187682 | orchestrator | 2025-09-17 00:33:44.187700 | orchestrator | 2025-09-17 00:33:44.187718 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-17 00:33:44.187736 | orchestrator | Wednesday 17 September 2025 00:33:43 +0000 (0:00:11.628) 0:00:11.808 *** 2025-09-17 00:33:44.187757 | orchestrator | =============================================================================== 2025-09-17 00:33:44.187776 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.63s 2025-09-17 00:33:44.458976 | orchestrator | + osism apply hddtemp 2025-09-17 00:33:56.485469 | orchestrator | 2025-09-17 00:33:56 | INFO  | Task c30672e7-38b4-463a-8342-d3686a71b3aa (hddtemp) was prepared for execution. 2025-09-17 00:33:56.485579 | orchestrator | 2025-09-17 00:33:56 | INFO  | It takes a moment until task c30672e7-38b4-463a-8342-d3686a71b3aa (hddtemp) has been started and output is visible here. 2025-09-17 00:34:23.670314 | orchestrator | 2025-09-17 00:34:23.670428 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-17 00:34:23.670443 | orchestrator | 2025-09-17 00:34:23.670468 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-17 00:34:23.670479 | orchestrator | Wednesday 17 September 2025 00:34:00 +0000 (0:00:00.253) 0:00:00.253 *** 2025-09-17 00:34:23.670489 | orchestrator | ok: [testbed-manager] 2025-09-17 00:34:23.670499 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:34:23.670509 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:34:23.670519 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:34:23.670528 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:34:23.670537 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:34:23.670547 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:34:23.670556 | orchestrator | 2025-09-17 00:34:23.670566 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-17 00:34:23.670575 | orchestrator | Wednesday 17 September 
2025 00:34:01 +0000 (0:00:00.669) 0:00:00.923 *** 2025-09-17 00:34:23.670587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:34:23.670599 | orchestrator | 2025-09-17 00:34:23.670609 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-17 00:34:23.670618 | orchestrator | Wednesday 17 September 2025 00:34:02 +0000 (0:00:01.179) 0:00:02.102 *** 2025-09-17 00:34:23.670628 | orchestrator | ok: [testbed-manager] 2025-09-17 00:34:23.670637 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:34:23.670646 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:34:23.670656 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:34:23.670665 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:34:23.670674 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:34:23.670683 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:34:23.670693 | orchestrator | 2025-09-17 00:34:23.670702 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-17 00:34:23.670712 | orchestrator | Wednesday 17 September 2025 00:34:04 +0000 (0:00:01.870) 0:00:03.973 *** 2025-09-17 00:34:23.670721 | orchestrator | changed: [testbed-manager] 2025-09-17 00:34:23.670731 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:34:23.670741 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:34:23.670750 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:34:23.670759 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:34:23.670788 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:34:23.670798 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:34:23.670807 | orchestrator | 2025-09-17 00:34:23.670817 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-17 00:34:23.670854 | orchestrator | Wednesday 17 September 2025 00:34:05 +0000 (0:00:01.101) 0:00:05.075 *** 2025-09-17 00:34:23.670865 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:34:23.670875 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:34:23.670886 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:34:23.670897 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:34:23.670908 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:34:23.670919 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:34:23.670930 | orchestrator | ok: [testbed-manager] 2025-09-17 00:34:23.670940 | orchestrator | 2025-09-17 00:34:23.670951 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-17 00:34:23.670962 | orchestrator | Wednesday 17 September 2025 00:34:06 +0000 (0:00:01.095) 0:00:06.171 *** 2025-09-17 00:34:23.670973 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:34:23.670984 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:34:23.670995 | orchestrator | changed: [testbed-manager] 2025-09-17 00:34:23.671005 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:34:23.671017 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:34:23.671028 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:34:23.671038 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:34:23.671049 | orchestrator | 2025-09-17 00:34:23.671060 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-17 00:34:23.671070 | orchestrator | Wednesday 17 September 2025 00:34:07 +0000 (0:00:00.788) 0:00:06.959 *** 2025-09-17 00:34:23.671081 | orchestrator | changed: [testbed-manager] 2025-09-17 00:34:23.671092 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:34:23.671103 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:34:23.671113 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:34:23.671123 | orchestrator | changed: 
[testbed-node-0] 2025-09-17 00:34:23.671134 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:34:23.671145 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:34:23.671155 | orchestrator | 2025-09-17 00:34:23.671166 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-17 00:34:23.671178 | orchestrator | Wednesday 17 September 2025 00:34:20 +0000 (0:00:12.939) 0:00:19.899 *** 2025-09-17 00:34:23.671189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:34:23.671201 | orchestrator | 2025-09-17 00:34:23.671211 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-17 00:34:23.671220 | orchestrator | Wednesday 17 September 2025 00:34:21 +0000 (0:00:01.323) 0:00:21.222 *** 2025-09-17 00:34:23.671230 | orchestrator | changed: [testbed-manager] 2025-09-17 00:34:23.671239 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:34:23.671248 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:34:23.671257 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:34:23.671267 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:34:23.671276 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:34:23.671285 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:34:23.671294 | orchestrator | 2025-09-17 00:34:23.671304 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:34:23.671313 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:34:23.671340 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:34:23.671357 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:34:23.671375 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:34:23.671385 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:34:23.671395 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:34:23.671405 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:34:23.671415 | orchestrator | 2025-09-17 00:34:23.671425 | orchestrator | 2025-09-17 00:34:23.671434 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:34:23.671444 | orchestrator | Wednesday 17 September 2025 00:34:23 +0000 (0:00:01.886) 0:00:23.109 *** 2025-09-17 00:34:23.671454 | orchestrator | =============================================================================== 2025-09-17 00:34:23.671463 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.94s 2025-09-17 00:34:23.671473 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.89s 2025-09-17 00:34:23.671483 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.87s 2025-09-17 00:34:23.671492 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.32s 2025-09-17 00:34:23.671502 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s 2025-09-17 00:34:23.671512 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.10s 2025-09-17 00:34:23.671521 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.10s 2025-09-17 00:34:23.671531 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.79s 2025-09-17 00:34:23.671541 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.67s 2025-09-17 00:34:23.945299 | orchestrator | ++ semver latest 7.1.1 2025-09-17 00:34:24.008610 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-17 00:34:24.008695 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-17 00:34:24.008710 | orchestrator | + sudo systemctl restart manager.service 2025-09-17 00:35:02.025788 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-17 00:35:02.025963 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-17 00:35:02.025982 | orchestrator | + local max_attempts=60 2025-09-17 00:35:02.025996 | orchestrator | + local name=ceph-ansible 2025-09-17 00:35:02.026008 | orchestrator | + local attempt_num=1 2025-09-17 00:35:02.026084 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:02.064824 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 00:35:02.064914 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:02.064926 | orchestrator | + sleep 5 2025-09-17 00:35:07.069394 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:07.104548 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 00:35:07.104610 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:07.104624 | orchestrator | + sleep 5 2025-09-17 00:35:12.110821 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:12.150501 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 00:35:12.150576 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:12.150589 | orchestrator | + sleep 5 2025-09-17 00:35:17.154681 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:17.198669 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 00:35:17.198720 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:17.198733 | orchestrator | + sleep 5 2025-09-17 00:35:22.203732 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:22.245306 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 00:35:22.245376 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:22.245414 | orchestrator | + sleep 5 2025-09-17 00:35:27.250005 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:27.293630 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 00:35:27.293711 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:27.293727 | orchestrator | + sleep 5 2025-09-17 00:35:32.299210 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:32.339449 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 00:35:32.339503 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:32.339534 | orchestrator | + sleep 5 2025-09-17 00:35:37.345718 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:37.385785 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-17 00:35:37.385887 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:37.385903 | orchestrator | + sleep 5 2025-09-17 00:35:42.389346 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:42.418136 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-17 00:35:42.418193 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:42.418202 | orchestrator | + sleep 5 2025-09-17 00:35:47.422588 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:47.459394 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-09-17 00:35:47.459447 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:47.459456 | orchestrator | + sleep 5 2025-09-17 00:35:52.466405 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:52.507581 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-17 00:35:52.507648 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:52.507662 | orchestrator | + sleep 5 2025-09-17 00:35:57.513395 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:35:57.555812 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-17 00:35:57.555930 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:35:57.555946 | orchestrator | + sleep 5 2025-09-17 00:36:02.561463 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:36:02.598297 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-17 00:36:02.598360 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 00:36:02.598375 | orchestrator | + sleep 5 2025-09-17 00:36:07.603698 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 00:36:07.641815 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-17 00:36:07.641889 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-17 00:36:07.641904 | orchestrator | + local max_attempts=60 2025-09-17 00:36:07.641919 | orchestrator | + local name=kolla-ansible 2025-09-17 00:36:07.641930 | orchestrator | + local attempt_num=1 2025-09-17 00:36:07.643640 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-17 00:36:07.675390 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-17 00:36:07.675414 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-17 00:36:07.675426 | orchestrator | + local max_attempts=60 2025-09-17 
00:36:07.675437 | orchestrator | + local name=osism-ansible 2025-09-17 00:36:07.675448 | orchestrator | + local attempt_num=1 2025-09-17 00:36:07.676378 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-17 00:36:07.719570 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-17 00:36:07.719649 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-17 00:36:07.719662 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-17 00:36:07.892070 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-17 00:36:08.047310 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-17 00:36:08.357984 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-17 00:36:08.358515 | orchestrator | + osism apply gather-facts 2025-09-17 00:36:20.360422 | orchestrator | 2025-09-17 00:36:20 | INFO  | Task 6ab93801-1b76-4a32-b102-7f645df4f34c (gather-facts) was prepared for execution. 2025-09-17 00:36:20.360535 | orchestrator | 2025-09-17 00:36:20 | INFO  | It takes a moment until task 6ab93801-1b76-4a32-b102-7f645df4f34c (gather-facts) has been started and output is visible here. 
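The `+`-traced loop above polls `docker inspect` every 5 seconds until the container reports `healthy`. A minimal reconstruction of the helper is sketched below; the function and variable names (`wait_for_container_healthy`, `max_attempts`, `attempt_num`) come from the trace, the body is inferred, and the `DOCKER` variable is an addition for testability (the traced script calls `/usr/bin/docker` directly):

```shell
# Reconstruction of the wait loop seen in the -x trace above.
# DOCKER is parameterized here; the traced script hardcodes /usr/bin/docker.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status every 5 seconds until it is healthy.
    until [[ "$($DOCKER inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} not healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log the `ceph-ansible` container cycles through `unhealthy`, then `starting`, then `healthy`, so the loop exits normally; with `max_attempts=60` and a 5-second sleep, the implied timeout is roughly five minutes per container.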
2025-09-17 00:36:33.103160 | orchestrator | 2025-09-17 00:36:33.103280 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-17 00:36:33.103297 | orchestrator | 2025-09-17 00:36:33.103334 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-17 00:36:33.103346 | orchestrator | Wednesday 17 September 2025 00:36:23 +0000 (0:00:00.163) 0:00:00.163 *** 2025-09-17 00:36:33.103357 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:36:33.103369 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:36:33.103380 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:36:33.103391 | orchestrator | ok: [testbed-manager] 2025-09-17 00:36:33.103401 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:36:33.103412 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:36:33.103422 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:36:33.103433 | orchestrator | 2025-09-17 00:36:33.103444 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-17 00:36:33.103455 | orchestrator | 2025-09-17 00:36:33.103466 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-17 00:36:33.103476 | orchestrator | Wednesday 17 September 2025 00:36:32 +0000 (0:00:08.393) 0:00:08.557 *** 2025-09-17 00:36:33.103487 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:36:33.103499 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:36:33.103510 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:36:33.103520 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:36:33.103531 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:36:33.103541 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:36:33.103552 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:36:33.103563 | orchestrator | 2025-09-17 00:36:33.103574 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-17 00:36:33.103585 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:36:33.103597 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:36:33.103608 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:36:33.103618 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:36:33.103629 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:36:33.103640 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:36:33.103650 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:36:33.103661 | orchestrator | 2025-09-17 00:36:33.103672 | orchestrator | 2025-09-17 00:36:33.103683 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:36:33.103694 | orchestrator | Wednesday 17 September 2025 00:36:32 +0000 (0:00:00.536) 0:00:09.094 *** 2025-09-17 00:36:33.103707 | orchestrator | =============================================================================== 2025-09-17 00:36:33.103719 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.39s 2025-09-17 00:36:33.103732 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-09-17 00:36:33.357454 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-17 00:36:33.370384 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-17 00:36:33.389891 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-17 00:36:33.410234 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-17 00:36:33.428555 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-17 00:36:33.448638 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-17 00:36:33.464519 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-17 00:36:33.477326 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-17 00:36:33.489418 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-17 00:36:33.508981 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-17 00:36:33.523154 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-17 00:36:33.540696 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-17 00:36:33.565138 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-17 00:36:33.578869 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-17 00:36:33.590293 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-17 00:36:33.611480 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-17 00:36:33.631336 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-17 00:36:33.649228 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-17 00:36:33.666342 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-17 00:36:33.681882 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-17 00:36:33.698132 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-17 00:36:33.827804 | orchestrator | ok: Runtime: 0:23:10.164419 2025-09-17 00:36:33.922584 | 2025-09-17 00:36:33.922743 | TASK [Deploy services] 2025-09-17 00:36:34.454154 | orchestrator | skipping: Conditional result was False 2025-09-17 00:36:34.472795 | 2025-09-17 00:36:34.472969 | TASK [Deploy in a nutshell] 2025-09-17 00:36:35.134896 | orchestrator | + set -e 2025-09-17 00:36:35.135027 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-17 00:36:35.135037 | orchestrator | ++ export INTERACTIVE=false 2025-09-17 00:36:35.135046 | orchestrator | ++ INTERACTIVE=false 2025-09-17 00:36:35.135051 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-17 00:36:35.135056 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-17 00:36:35.135061 | orchestrator | + source /opt/manager-vars.sh 2025-09-17 00:36:35.135084 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-17 00:36:35.135096 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-17 00:36:35.135101 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-17 00:36:35.135107 | orchestrator | ++ CEPH_VERSION=reef 2025-09-17 00:36:35.135112 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-17 00:36:35.135119 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2025-09-17 00:36:35.135123 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-17 00:36:35.135140 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-17 00:36:35.135144 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-17 00:36:35.135151 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-17 00:36:35.135155 | orchestrator | ++ export ARA=false 2025-09-17 00:36:35.135158 | orchestrator | ++ ARA=false 2025-09-17 00:36:35.135162 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-17 00:36:35.135168 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-17 00:36:35.135172 | orchestrator | ++ export TEMPEST=true 2025-09-17 00:36:35.135176 | orchestrator | ++ TEMPEST=true 2025-09-17 00:36:35.135179 | orchestrator | ++ export IS_ZUUL=true 2025-09-17 00:36:35.135183 | orchestrator | ++ IS_ZUUL=true 2025-09-17 00:36:35.135187 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183 2025-09-17 00:36:35.135191 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183 2025-09-17 00:36:35.135194 | orchestrator | ++ export EXTERNAL_API=false 2025-09-17 00:36:35.135198 | orchestrator | ++ EXTERNAL_API=false 2025-09-17 00:36:35.135202 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-17 00:36:35.135205 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-17 00:36:35.135209 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-17 00:36:35.135213 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-17 00:36:35.135216 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-17 00:36:35.135220 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-17 00:36:35.135224 | orchestrator | + echo 2025-09-17 00:36:35.135228 | orchestrator | 2025-09-17 00:36:35.135232 | orchestrator | # PULL IMAGES 2025-09-17 00:36:35.135236 | orchestrator | 2025-09-17 00:36:35.135239 | orchestrator | + echo '# PULL IMAGES' 2025-09-17 00:36:35.135243 | orchestrator | + echo 2025-09-17 00:36:35.135488 | orchestrator | ++ semver latest 7.0.0 2025-09-17 
00:36:35.170806 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-17 00:36:35.171052 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-17 00:36:35.171087 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-17 00:36:36.973966 | orchestrator | 2025-09-17 00:36:36 | INFO  | Trying to run play pull-images in environment custom 2025-09-17 00:36:47.117573 | orchestrator | 2025-09-17 00:36:47 | INFO  | Task 317b99ac-39d4-4b73-a916-b3b9bf128809 (pull-images) was prepared for execution. 2025-09-17 00:36:47.117688 | orchestrator | 2025-09-17 00:36:47 | INFO  | Task 317b99ac-39d4-4b73-a916-b3b9bf128809 is running in background. No more output. Check ARA for logs. 2025-09-17 00:36:49.273161 | orchestrator | 2025-09-17 00:36:49 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-17 00:36:59.454982 | orchestrator | 2025-09-17 00:36:59 | INFO  | Task 752bea8a-37f6-4fb3-a307-969d1114b2a0 (wipe-partitions) was prepared for execution. 2025-09-17 00:36:59.455104 | orchestrator | 2025-09-17 00:36:59 | INFO  | It takes a moment until task 752bea8a-37f6-4fb3-a307-969d1114b2a0 (wipe-partitions) has been started and output is visible here. 
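Just before pulling images, the trace runs `semver latest 7.0.0`, gets `-1`, and then falls through to a literal comparison against `latest` before invoking `osism apply ... -e custom pull-images`. The gating logic can be sketched as follows; `semver_cmp` is a hypothetical stand-in for the `semver` helper seen in the trace, with its behavior for non-version strings matched to what the log shows:

```shell
MANAGER_VERSION="${MANAGER_VERSION:-latest}"

semver_cmp() {
    # Prints -1, 0, or 1. Anything that is not x.y.z compares as -1,
    # matching the trace where "semver latest 7.0.0" yielded -1.
    if ! [[ "$1" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then echo -1; return; fi
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1
    else
        echo 1
    fi
}

# Inferred from the two [[ ]] tests in the trace: manager releases >= 7.0.0,
# and the moving "latest" tag, run pull-images in the "custom" environment.
if [[ "$(semver_cmp "$MANAGER_VERSION" 7.0.0)" -ge 0 || "$MANAGER_VERSION" == "latest" ]]; then
    echo "custom"
fi
```

With `MANAGER_VERSION=latest`, the first test fails (`-1 -ge 0`) and the second succeeds, which is exactly the branch order visible in the trace.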
2025-09-17 00:37:12.154230 | orchestrator | 2025-09-17 00:37:12.154391 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-17 00:37:12.154410 | orchestrator | 2025-09-17 00:37:12.154421 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-17 00:37:12.154449 | orchestrator | Wednesday 17 September 2025 00:37:03 +0000 (0:00:00.133) 0:00:00.133 *** 2025-09-17 00:37:12.154521 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:37:12.154536 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:37:12.154548 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:37:12.154559 | orchestrator | 2025-09-17 00:37:12.154570 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-17 00:37:12.154619 | orchestrator | Wednesday 17 September 2025 00:37:04 +0000 (0:00:00.670) 0:00:00.804 *** 2025-09-17 00:37:12.154631 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:37:12.154642 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:37:12.154658 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:37:12.154669 | orchestrator | 2025-09-17 00:37:12.154680 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-17 00:37:12.154691 | orchestrator | Wednesday 17 September 2025 00:37:04 +0000 (0:00:00.261) 0:00:01.065 *** 2025-09-17 00:37:12.154702 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:37:12.154714 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:37:12.154724 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:37:12.154735 | orchestrator | 2025-09-17 00:37:12.154746 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-17 00:37:12.154757 | orchestrator | Wednesday 17 September 2025 00:37:05 +0000 (0:00:00.808) 0:00:01.873 *** 2025-09-17 00:37:12.154768 | orchestrator | skipping: 
[testbed-node-3] 2025-09-17 00:37:12.154779 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:37:12.154790 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:37:12.154800 | orchestrator | 2025-09-17 00:37:12.154811 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-17 00:37:12.154822 | orchestrator | Wednesday 17 September 2025 00:37:05 +0000 (0:00:00.233) 0:00:02.107 *** 2025-09-17 00:37:12.154833 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-17 00:37:12.154879 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-17 00:37:12.154891 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-17 00:37:12.154901 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-17 00:37:12.154912 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-17 00:37:12.154922 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-17 00:37:12.154933 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-17 00:37:12.154944 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-17 00:37:12.154954 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-17 00:37:12.154965 | orchestrator | 2025-09-17 00:37:12.154976 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-17 00:37:12.154987 | orchestrator | Wednesday 17 September 2025 00:37:06 +0000 (0:00:01.328) 0:00:03.435 *** 2025-09-17 00:37:12.154998 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-17 00:37:12.155009 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-17 00:37:12.155019 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-17 00:37:12.155030 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-17 00:37:12.155040 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-17 00:37:12.155051 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-09-17 00:37:12.155062 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-17 00:37:12.155072 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-17 00:37:12.155083 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-17 00:37:12.155093 | orchestrator | 2025-09-17 00:37:12.155104 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-17 00:37:12.155114 | orchestrator | Wednesday 17 September 2025 00:37:08 +0000 (0:00:01.396) 0:00:04.831 *** 2025-09-17 00:37:12.155125 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-17 00:37:12.155135 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-17 00:37:12.155146 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-17 00:37:12.155156 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-17 00:37:12.155167 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-17 00:37:12.155188 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-17 00:37:12.155199 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-17 00:37:12.155219 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-17 00:37:12.155230 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-17 00:37:12.155240 | orchestrator | 2025-09-17 00:37:12.155251 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-17 00:37:12.155262 | orchestrator | Wednesday 17 September 2025 00:37:10 +0000 (0:00:02.459) 0:00:07.291 *** 2025-09-17 00:37:12.155273 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:37:12.155283 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:37:12.155294 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:37:12.155304 | orchestrator | 2025-09-17 00:37:12.155315 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-09-17 00:37:12.155326 | orchestrator | Wednesday 17 September 2025 00:37:11 +0000 (0:00:00.598) 0:00:07.889 *** 2025-09-17 00:37:12.155337 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:37:12.155347 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:37:12.155358 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:37:12.155369 | orchestrator | 2025-09-17 00:37:12.155379 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:37:12.155392 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:37:12.155405 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:37:12.155437 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:37:12.155449 | orchestrator | 2025-09-17 00:37:12.155460 | orchestrator | 2025-09-17 00:37:12.155471 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:37:12.155482 | orchestrator | Wednesday 17 September 2025 00:37:11 +0000 (0:00:00.631) 0:00:08.521 *** 2025-09-17 00:37:12.155492 | orchestrator | =============================================================================== 2025-09-17 00:37:12.155503 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.46s 2025-09-17 00:37:12.155514 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.40s 2025-09-17 00:37:12.155524 | orchestrator | Check device availability ----------------------------------------------- 1.33s 2025-09-17 00:37:12.155535 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.81s 2025-09-17 00:37:12.155546 | orchestrator | Find all logical devices owned by UID 167 
------------------------------- 0.67s 2025-09-17 00:37:12.155557 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2025-09-17 00:37:12.155567 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s 2025-09-17 00:37:12.155578 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s 2025-09-17 00:37:12.155589 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-09-17 00:37:24.471011 | orchestrator | 2025-09-17 00:37:24 | INFO  | Task 88b15b2a-88b9-4bbb-a6f6-751e212120a1 (facts) was prepared for execution. 2025-09-17 00:37:24.471155 | orchestrator | 2025-09-17 00:37:24 | INFO  | It takes a moment until task 88b15b2a-88b9-4bbb-a6f6-751e212120a1 (facts) has been started and output is visible here. 2025-09-17 00:37:36.393708 | orchestrator | 2025-09-17 00:37:36.393824 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-17 00:37:36.393898 | orchestrator | 2025-09-17 00:37:36.393914 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-17 00:37:36.393926 | orchestrator | Wednesday 17 September 2025 00:37:28 +0000 (0:00:00.277) 0:00:00.277 *** 2025-09-17 00:37:36.393937 | orchestrator | ok: [testbed-manager] 2025-09-17 00:37:36.393949 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:37:36.393959 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:37:36.394086 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:37:36.394101 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:37:36.394111 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:37:36.394122 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:37:36.394132 | orchestrator | 2025-09-17 00:37:36.394146 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-17 00:37:36.394157 | 
orchestrator | Wednesday 17 September 2025 00:37:29 +0000 (0:00:01.078) 0:00:01.355 *** 2025-09-17 00:37:36.394168 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:37:36.394179 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:37:36.394190 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:37:36.394200 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:37:36.394211 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:37:36.394221 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:37:36.394232 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:37:36.394242 | orchestrator | 2025-09-17 00:37:36.394253 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-17 00:37:36.394265 | orchestrator | 2025-09-17 00:37:36.394277 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-17 00:37:36.394290 | orchestrator | Wednesday 17 September 2025 00:37:30 +0000 (0:00:01.218) 0:00:02.574 *** 2025-09-17 00:37:36.394302 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:37:36.394314 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:37:36.394327 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:37:36.394339 | orchestrator | ok: [testbed-manager] 2025-09-17 00:37:36.394351 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:37:36.394363 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:37:36.394374 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:37:36.394386 | orchestrator | 2025-09-17 00:37:36.394398 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-17 00:37:36.394410 | orchestrator | 2025-09-17 00:37:36.394421 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-17 00:37:36.394448 | orchestrator | Wednesday 17 September 2025 00:37:35 +0000 (0:00:04.861) 0:00:07.435 *** 2025-09-17 00:37:36.394462 | orchestrator | 
skipping: [testbed-manager] 2025-09-17 00:37:36.394474 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:37:36.394487 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:37:36.394499 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:37:36.394510 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:37:36.394522 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:37:36.394534 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:37:36.394546 | orchestrator | 2025-09-17 00:37:36.394558 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:37:36.394571 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:37:36.394584 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:37:36.394597 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:37:36.394609 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:37:36.394620 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:37:36.394630 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:37:36.394641 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:37:36.394651 | orchestrator | 2025-09-17 00:37:36.394671 | orchestrator | 2025-09-17 00:37:36.394681 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:37:36.394692 | orchestrator | Wednesday 17 September 2025 00:37:36 +0000 (0:00:00.719) 0:00:08.155 *** 2025-09-17 00:37:36.394703 | orchestrator | =============================================================================== 
2025-09-17 00:37:36.394713 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.86s 2025-09-17 00:37:36.394724 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s 2025-09-17 00:37:36.394734 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2025-09-17 00:37:36.394745 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s 2025-09-17 00:37:38.651406 | orchestrator | 2025-09-17 00:37:38 | INFO  | Task 1e77deb4-c63b-4921-91b0-d0f0486ff732 (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-17 00:37:38.651558 | orchestrator | 2025-09-17 00:37:38 | INFO  | It takes a moment until task 1e77deb4-c63b-4921-91b0-d0f0486ff732 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-09-17 00:37:50.180011 | orchestrator | 2025-09-17 00:37:50.180128 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-17 00:37:50.180144 | orchestrator | 2025-09-17 00:37:50.180156 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-17 00:37:50.180170 | orchestrator | Wednesday 17 September 2025 00:37:42 +0000 (0:00:00.317) 0:00:00.317 *** 2025-09-17 00:37:50.180182 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-17 00:37:50.180193 | orchestrator | 2025-09-17 00:37:50.180204 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-17 00:37:50.180215 | orchestrator | Wednesday 17 September 2025 00:37:42 +0000 (0:00:00.248) 0:00:00.565 *** 2025-09-17 00:37:50.180226 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:37:50.180237 | orchestrator | 2025-09-17 00:37:50.180248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:37:50.180259 | orchestrator | 
Wednesday 17 September 2025 00:37:43 +0000 (0:00:00.217) 0:00:00.783 ***
2025-09-17 00:37:50.180269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-17 00:37:50.180280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-17 00:37:50.180292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-17 00:37:50.180302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-17 00:37:50.180313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-17 00:37:50.180324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-17 00:37:50.180334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-17 00:37:50.180345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-17 00:37:50.180355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-17 00:37:50.180366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-17 00:37:50.180377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-17 00:37:50.180395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-17 00:37:50.180407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-17 00:37:50.180417 | orchestrator |
2025-09-17 00:37:50.180428 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.180438 | orchestrator | Wednesday 17 September 2025 00:37:43 +0000 (0:00:00.359) 0:00:01.142 ***
2025-09-17 00:37:50.180449 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.180481 | orchestrator |
2025-09-17 00:37:50.180494 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.180507 | orchestrator | Wednesday 17 September 2025 00:37:44 +0000 (0:00:00.441) 0:00:01.584 ***
2025-09-17 00:37:50.180520 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.180532 | orchestrator |
2025-09-17 00:37:50.180544 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.180557 | orchestrator | Wednesday 17 September 2025 00:37:44 +0000 (0:00:00.208) 0:00:01.792 ***
2025-09-17 00:37:50.180569 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.180580 | orchestrator |
2025-09-17 00:37:50.180592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.180604 | orchestrator | Wednesday 17 September 2025 00:37:44 +0000 (0:00:00.202) 0:00:01.994 ***
2025-09-17 00:37:50.180616 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.180633 | orchestrator |
2025-09-17 00:37:50.180646 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.180658 | orchestrator | Wednesday 17 September 2025 00:37:44 +0000 (0:00:00.209) 0:00:02.204 ***
2025-09-17 00:37:50.180670 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.180680 | orchestrator |
2025-09-17 00:37:50.180691 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.180702 | orchestrator | Wednesday 17 September 2025 00:37:44 +0000 (0:00:00.193) 0:00:02.397 ***
2025-09-17 00:37:50.180713 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.180723 | orchestrator |
2025-09-17 00:37:50.180734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.180744 | orchestrator | Wednesday 17 September 2025 00:37:44 +0000 (0:00:00.182) 0:00:02.580 ***
2025-09-17 00:37:50.180755 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.180765 | orchestrator |
2025-09-17 00:37:50.180776 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.180787 | orchestrator | Wednesday 17 September 2025 00:37:45 +0000 (0:00:00.214) 0:00:02.795 ***
2025-09-17 00:37:50.180798 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.180808 | orchestrator |
2025-09-17 00:37:50.180819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.180829 | orchestrator | Wednesday 17 September 2025 00:37:45 +0000 (0:00:00.202) 0:00:02.998 ***
2025-09-17 00:37:50.180840 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130)
2025-09-17 00:37:50.180875 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130)
2025-09-17 00:37:50.180886 | orchestrator |
2025-09-17 00:37:50.180896 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.180907 | orchestrator | Wednesday 17 September 2025 00:37:45 +0000 (0:00:00.430) 0:00:03.429 ***
2025-09-17 00:37:50.180934 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_03b82624-b2d4-4492-aa08-93320337b68f)
2025-09-17 00:37:50.180945 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_03b82624-b2d4-4492-aa08-93320337b68f)
2025-09-17 00:37:50.180956 | orchestrator |
2025-09-17 00:37:50.180966 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.180977 | orchestrator | Wednesday 17 September 2025 00:37:46 +0000 (0:00:00.430) 0:00:03.859 ***
2025-09-17 00:37:50.180988 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6f825aad-5321-4538-8ab0-212b689e74fb)
2025-09-17 00:37:50.180998 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6f825aad-5321-4538-8ab0-212b689e74fb)
2025-09-17 00:37:50.181009 | orchestrator |
2025-09-17 00:37:50.181020 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.181031 | orchestrator | Wednesday 17 September 2025 00:37:46 +0000 (0:00:00.600) 0:00:04.460 ***
2025-09-17 00:37:50.181041 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a)
2025-09-17 00:37:50.181060 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a)
2025-09-17 00:37:50.181070 | orchestrator |
2025-09-17 00:37:50.181081 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:50.181092 | orchestrator | Wednesday 17 September 2025 00:37:47 +0000 (0:00:00.623) 0:00:05.084 ***
2025-09-17 00:37:50.181102 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-17 00:37:50.181113 | orchestrator |
2025-09-17 00:37:50.181123 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:50.181140 | orchestrator | Wednesday 17 September 2025 00:37:48 +0000 (0:00:00.700) 0:00:05.784 ***
2025-09-17 00:37:50.181151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-17 00:37:50.181161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-17 00:37:50.181172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-17 00:37:50.181182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-17 00:37:50.181193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-17 00:37:50.181204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-17 00:37:50.181214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-17 00:37:50.181225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-17 00:37:50.181235 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-17 00:37:50.181246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-17 00:37:50.181256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-17 00:37:50.181267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-17 00:37:50.181277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-17 00:37:50.181288 | orchestrator |
2025-09-17 00:37:50.181298 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:50.181309 | orchestrator | Wednesday 17 September 2025 00:37:48 +0000 (0:00:00.382) 0:00:06.166 ***
2025-09-17 00:37:50.181320 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.181330 | orchestrator |
2025-09-17 00:37:50.181341 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:50.181351 | orchestrator | Wednesday 17 September 2025 00:37:48 +0000 (0:00:00.203) 0:00:06.369 ***
2025-09-17 00:37:50.181362 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.181372 | orchestrator |
2025-09-17 00:37:50.181383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:50.181394 | orchestrator | Wednesday 17 September 2025 00:37:48 +0000 (0:00:00.196) 0:00:06.565 ***
2025-09-17 00:37:50.181404 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.181415 | orchestrator |
2025-09-17 00:37:50.181425 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:50.181436 | orchestrator | Wednesday 17 September 2025 00:37:49 +0000 (0:00:00.196) 0:00:06.762 ***
2025-09-17 00:37:50.181446 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.181457 | orchestrator |
2025-09-17 00:37:50.181468 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:50.181478 | orchestrator | Wednesday 17 September 2025 00:37:49 +0000 (0:00:00.193) 0:00:06.955 ***
2025-09-17 00:37:50.181489 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.181499 | orchestrator |
2025-09-17 00:37:50.181516 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:50.181527 | orchestrator | Wednesday 17 September 2025 00:37:49 +0000 (0:00:00.188) 0:00:07.144 ***
2025-09-17 00:37:50.181538 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.181548 | orchestrator |
2025-09-17 00:37:50.181559 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:50.181569 | orchestrator | Wednesday 17 September 2025 00:37:49 +0000 (0:00:00.190) 0:00:07.335 ***
2025-09-17 00:37:50.181580 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:50.181590 | orchestrator |
2025-09-17 00:37:50.181601 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:50.181612 | orchestrator | Wednesday 17 September 2025 00:37:49 +0000 (0:00:00.190) 0:00:07.525 ***
2025-09-17 00:37:50.181628 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.786547 | orchestrator |
2025-09-17 00:37:57.786660 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:57.786679 | orchestrator | Wednesday 17 September 2025 00:37:50 +0000 (0:00:00.227) 0:00:07.752 ***
2025-09-17 00:37:57.786691 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-17 00:37:57.786704 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-17 00:37:57.786715 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-17 00:37:57.786726 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-17 00:37:57.786737 | orchestrator |
2025-09-17 00:37:57.786749 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:57.786759 | orchestrator | Wednesday 17 September 2025 00:37:51 +0000 (0:00:01.050) 0:00:08.802 ***
2025-09-17 00:37:57.786770 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.786781 | orchestrator |
2025-09-17 00:37:57.786792 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:57.786803 | orchestrator | Wednesday 17 September 2025 00:37:51 +0000 (0:00:00.197) 0:00:09.000 ***
2025-09-17 00:37:57.786813 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.786824 | orchestrator |
2025-09-17 00:37:57.786835 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:57.786901 | orchestrator | Wednesday 17 September 2025 00:37:51 +0000 (0:00:00.195) 0:00:09.196 ***
2025-09-17 00:37:57.786913 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.786924 | orchestrator |
2025-09-17 00:37:57.786934 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:37:57.786945 | orchestrator | Wednesday 17 September 2025 00:37:51 +0000 (0:00:00.208) 0:00:09.404 ***
2025-09-17 00:37:57.786956 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.786966 | orchestrator |
2025-09-17 00:37:57.786977 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-17 00:37:57.786988 | orchestrator | Wednesday 17 September 2025 00:37:52 +0000 (0:00:00.228) 0:00:09.633 ***
2025-09-17 00:37:57.786999 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-09-17 00:37:57.787010 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-09-17 00:37:57.787020 | orchestrator |
2025-09-17 00:37:57.787031 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-17 00:37:57.787042 | orchestrator | Wednesday 17 September 2025 00:37:52 +0000 (0:00:00.172) 0:00:09.806 ***
2025-09-17 00:37:57.787074 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.787088 | orchestrator |
2025-09-17 00:37:57.787100 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-17 00:37:57.787113 | orchestrator | Wednesday 17 September 2025 00:37:52 +0000 (0:00:00.141) 0:00:09.947 ***
2025-09-17 00:37:57.787125 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.787137 | orchestrator |
2025-09-17 00:37:57.787149 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-17 00:37:57.787161 | orchestrator | Wednesday 17 September 2025 00:37:52 +0000 (0:00:00.140) 0:00:10.088 ***
2025-09-17 00:37:57.787174 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.787208 | orchestrator |
2025-09-17 00:37:57.787222 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-17 00:37:57.787234 | orchestrator | Wednesday 17 September 2025 00:37:52 +0000 (0:00:00.158) 0:00:10.246 ***
2025-09-17 00:37:57.787246 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:37:57.787258 | orchestrator |
2025-09-17 00:37:57.787270 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-17 00:37:57.787283 | orchestrator | Wednesday 17 September 2025 00:37:52 +0000 (0:00:00.138) 0:00:10.385 ***
2025-09-17 00:37:57.787295 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'}})
2025-09-17 00:37:57.787308 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'}})
2025-09-17 00:37:57.787320 | orchestrator |
2025-09-17 00:37:57.787332 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-17 00:37:57.787344 | orchestrator | Wednesday 17 September 2025 00:37:52 +0000 (0:00:00.162) 0:00:10.547 ***
2025-09-17 00:37:57.787357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'}})
2025-09-17 00:37:57.787377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'}})
2025-09-17 00:37:57.787389 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.787402 | orchestrator |
2025-09-17 00:37:57.787414 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-17 00:37:57.787427 | orchestrator | Wednesday 17 September 2025 00:37:53 +0000 (0:00:00.152) 0:00:10.699 ***
2025-09-17 00:37:57.787439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'}})
2025-09-17 00:37:57.787450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'}})
2025-09-17 00:37:57.787461 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.787472 | orchestrator |
2025-09-17 00:37:57.787483 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-17 00:37:57.787493 | orchestrator | Wednesday 17 September 2025 00:37:53 +0000 (0:00:00.425) 0:00:11.125 ***
2025-09-17 00:37:57.787504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'}})
2025-09-17 00:37:57.787515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'}})
2025-09-17 00:37:57.787526 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.787536 | orchestrator |
2025-09-17 00:37:57.787563 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-17 00:37:57.787574 | orchestrator | Wednesday 17 September 2025 00:37:53 +0000 (0:00:00.155) 0:00:11.281 ***
2025-09-17 00:37:57.787585 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:37:57.787595 | orchestrator |
2025-09-17 00:37:57.787612 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-17 00:37:57.787623 | orchestrator | Wednesday 17 September 2025 00:37:53 +0000 (0:00:00.152) 0:00:11.433 ***
2025-09-17 00:37:57.787634 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:37:57.787644 | orchestrator |
2025-09-17 00:37:57.787655 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-17 00:37:57.787666 | orchestrator | Wednesday 17 September 2025 00:37:54 +0000 (0:00:00.152) 0:00:11.586 ***
2025-09-17 00:37:57.787676 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.787687 | orchestrator |
2025-09-17 00:37:57.787697 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-17 00:37:57.787708 | orchestrator | Wednesday 17 September 2025 00:37:54 +0000 (0:00:00.146) 0:00:11.732 ***
2025-09-17 00:37:57.787718 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.787729 | orchestrator |
2025-09-17 00:37:57.787747 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-17 00:37:57.787758 | orchestrator | Wednesday 17 September 2025 00:37:54 +0000 (0:00:00.142) 0:00:11.874 ***
2025-09-17 00:37:57.787769 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.787779 | orchestrator |
2025-09-17 00:37:57.787790 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-17 00:37:57.787801 | orchestrator | Wednesday 17 September 2025 00:37:54 +0000 (0:00:00.151) 0:00:12.026 ***
2025-09-17 00:37:57.787811 | orchestrator | ok: [testbed-node-3] => {
2025-09-17 00:37:57.787822 | orchestrator |     "ceph_osd_devices": {
2025-09-17 00:37:57.787833 | orchestrator |         "sdb": {
2025-09-17 00:37:57.787844 | orchestrator |             "osd_lvm_uuid": "3f2c044b-dfa5-5506-ae92-c5b86c73e5ac"
2025-09-17 00:37:57.787876 | orchestrator |         },
2025-09-17 00:37:57.787887 | orchestrator |         "sdc": {
2025-09-17 00:37:57.787898 | orchestrator |             "osd_lvm_uuid": "fe66c6e3-4f85-5e6e-b974-d8af1fb98b15"
2025-09-17 00:37:57.787909 | orchestrator |         }
2025-09-17 00:37:57.787919 | orchestrator |     }
2025-09-17 00:37:57.787931 | orchestrator | }
2025-09-17 00:37:57.787942 | orchestrator |
2025-09-17 00:37:57.787953 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-17 00:37:57.787963 | orchestrator | Wednesday 17 September 2025 00:37:54 +0000 (0:00:00.135) 0:00:12.161 ***
2025-09-17 00:37:57.787974 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.787984 | orchestrator |
2025-09-17 00:37:57.787995 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-17 00:37:57.788005 | orchestrator | Wednesday 17 September 2025 00:37:54 +0000 (0:00:00.139) 0:00:12.301 ***
2025-09-17 00:37:57.788016 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.788026 | orchestrator |
2025-09-17 00:37:57.788037 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-17 00:37:57.788048 | orchestrator | Wednesday 17 September 2025 00:37:54 +0000 (0:00:00.132) 0:00:12.433 ***
2025-09-17 00:37:57.788058 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:37:57.788069 | orchestrator |
2025-09-17 00:37:57.788079 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-17 00:37:57.788090 | orchestrator | Wednesday 17 September 2025 00:37:55 +0000 (0:00:00.152) 0:00:12.586 ***
2025-09-17 00:37:57.788100 | orchestrator | changed: [testbed-node-3] => {
2025-09-17 00:37:57.788111 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-17 00:37:57.788122 | orchestrator |         "ceph_osd_devices": {
2025-09-17 00:37:57.788132 | orchestrator |             "sdb": {
2025-09-17 00:37:57.788143 | orchestrator |                 "osd_lvm_uuid": "3f2c044b-dfa5-5506-ae92-c5b86c73e5ac"
2025-09-17 00:37:57.788153 | orchestrator |             },
2025-09-17 00:37:57.788164 | orchestrator |             "sdc": {
2025-09-17 00:37:57.788175 | orchestrator |                 "osd_lvm_uuid": "fe66c6e3-4f85-5e6e-b974-d8af1fb98b15"
2025-09-17 00:37:57.788185 | orchestrator |             }
2025-09-17 00:37:57.788196 | orchestrator |         },
2025-09-17 00:37:57.788206 | orchestrator |         "lvm_volumes": [
2025-09-17 00:37:57.788217 | orchestrator |             {
2025-09-17 00:37:57.788227 | orchestrator |                 "data": "osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac",
2025-09-17 00:37:57.788238 | orchestrator |                 "data_vg": "ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac"
2025-09-17 00:37:57.788249 | orchestrator |             },
2025-09-17 00:37:57.788259 | orchestrator |             {
2025-09-17 00:37:57.788270 | orchestrator |                 "data": "osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15",
2025-09-17 00:37:57.788281 | orchestrator |                 "data_vg": "ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15"
2025-09-17 00:37:57.788291 | orchestrator |             }
2025-09-17 00:37:57.788302 | orchestrator |         ]
2025-09-17 00:37:57.788312 | orchestrator |     }
2025-09-17 00:37:57.788323 | orchestrator | }
2025-09-17 00:37:57.788334 | orchestrator |
2025-09-17 00:37:57.788350 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-17 00:37:57.788368 | orchestrator | Wednesday 17 September 2025 00:37:55 +0000 (0:00:00.201) 0:00:12.787 ***
2025-09-17 00:37:57.788378 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 00:37:57.788389 | orchestrator |
2025-09-17 00:37:57.788400 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-17 00:37:57.788411 | orchestrator |
2025-09-17 00:37:57.788421 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-17 00:37:57.788431 | orchestrator | Wednesday 17 September 2025 00:37:57 +0000 (0:00:02.075) 0:00:14.863 ***
2025-09-17 00:37:57.788442 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-17 00:37:57.788452 | orchestrator |
2025-09-17 00:37:57.788463 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-17 00:37:57.788473 | orchestrator | Wednesday 17 September 2025 00:37:57 +0000 (0:00:00.255) 0:00:15.119 ***
2025-09-17 00:37:57.788484 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:37:57.788494 | orchestrator |
2025-09-17 00:37:57.788505 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:37:57.788522 | orchestrator | Wednesday 17 September 2025 00:37:57 +0000 (0:00:00.239) 0:00:15.359 ***
2025-09-17 00:38:05.546451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-17 00:38:05.546559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-17 00:38:05.546576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-17 00:38:05.546588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-17 00:38:05.546599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-17 00:38:05.546610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-17 00:38:05.546621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-17 00:38:05.546632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-17 00:38:05.546643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-17 00:38:05.546653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-17 00:38:05.546664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-17 00:38:05.546675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-17 00:38:05.546686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-17 00:38:05.546703 | orchestrator |
2025-09-17 00:38:05.546715 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.546727 | orchestrator | Wednesday 17 September 2025 00:37:58 +0000 (0:00:00.367) 0:00:15.727 ***
2025-09-17 00:38:05.546739 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.546751 | orchestrator |
2025-09-17 00:38:05.546762 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.546773 | orchestrator | Wednesday 17 September 2025 00:37:58 +0000 (0:00:00.200) 0:00:15.928 ***
2025-09-17 00:38:05.546784 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.546795 | orchestrator |
2025-09-17 00:38:05.546806 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.546817 | orchestrator | Wednesday 17 September 2025 00:37:58 +0000 (0:00:00.200) 0:00:16.129 ***
2025-09-17 00:38:05.546827 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.546838 | orchestrator |
2025-09-17 00:38:05.546895 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.546984 | orchestrator | Wednesday 17 September 2025 00:37:58 +0000 (0:00:00.183) 0:00:16.312 ***
2025-09-17 00:38:05.547048 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.547087 | orchestrator |
2025-09-17 00:38:05.547101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.547114 | orchestrator | Wednesday 17 September 2025 00:37:58 +0000 (0:00:00.203) 0:00:16.516 ***
2025-09-17 00:38:05.547126 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.547139 | orchestrator |
2025-09-17 00:38:05.547152 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.547165 | orchestrator | Wednesday 17 September 2025 00:37:59 +0000 (0:00:00.562) 0:00:17.078 ***
2025-09-17 00:38:05.547177 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.547189 | orchestrator |
2025-09-17 00:38:05.547202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.547215 | orchestrator | Wednesday 17 September 2025 00:37:59 +0000 (0:00:00.193) 0:00:17.272 ***
2025-09-17 00:38:05.547253 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.547266 | orchestrator |
2025-09-17 00:38:05.547279 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.547292 | orchestrator | Wednesday 17 September 2025 00:37:59 +0000 (0:00:00.204) 0:00:17.476 ***
2025-09-17 00:38:05.547304 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.547316 | orchestrator |
2025-09-17 00:38:05.547329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.547341 | orchestrator | Wednesday 17 September 2025 00:38:00 +0000 (0:00:00.194) 0:00:17.670 ***
2025-09-17 00:38:05.547353 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e)
2025-09-17 00:38:05.547365 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e)
2025-09-17 00:38:05.547375 | orchestrator |
2025-09-17 00:38:05.547386 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.547397 | orchestrator | Wednesday 17 September 2025 00:38:00 +0000 (0:00:00.448) 0:00:18.118 ***
2025-09-17 00:38:05.547408 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_47b64ee5-5944-488f-91ba-80947343c2c4)
2025-09-17 00:38:05.547419 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_47b64ee5-5944-488f-91ba-80947343c2c4)
2025-09-17 00:38:05.547429 | orchestrator |
2025-09-17 00:38:05.547440 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.547451 | orchestrator | Wednesday 17 September 2025 00:38:00 +0000 (0:00:00.401) 0:00:18.520 ***
2025-09-17 00:38:05.547462 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_69134018-d148-466a-9d44-263112a1226d)
2025-09-17 00:38:05.547473 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_69134018-d148-466a-9d44-263112a1226d)
2025-09-17 00:38:05.547483 | orchestrator |
2025-09-17 00:38:05.547494 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.547505 | orchestrator | Wednesday 17 September 2025 00:38:01 +0000 (0:00:00.440) 0:00:18.960 ***
2025-09-17 00:38:05.547533 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_34b516b0-60cf-4ba1-b912-e488bac04690)
2025-09-17 00:38:05.547545 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_34b516b0-60cf-4ba1-b912-e488bac04690)
2025-09-17 00:38:05.547556 | orchestrator |
2025-09-17 00:38:05.547567 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:38:05.547578 | orchestrator | Wednesday 17 September 2025 00:38:01 +0000 (0:00:00.422) 0:00:19.383 ***
2025-09-17 00:38:05.547588 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-17 00:38:05.547599 | orchestrator |
2025-09-17 00:38:05.547610 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:05.547621 | orchestrator | Wednesday 17 September 2025 00:38:02 +0000 (0:00:00.357) 0:00:19.741 ***
2025-09-17 00:38:05.547631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-17 00:38:05.547657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-17 00:38:05.547676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-17 00:38:05.547694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-17 00:38:05.547711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-17 00:38:05.547728 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-17 00:38:05.547745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-17 00:38:05.547762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-17 00:38:05.547780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-17 00:38:05.547796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-17 00:38:05.547812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-17 00:38:05.547829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-17 00:38:05.547880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-17 00:38:05.547899 | orchestrator |
2025-09-17 00:38:05.547917 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:05.547935 | orchestrator | Wednesday 17 September 2025 00:38:02 +0000 (0:00:00.414) 0:00:20.155 ***
2025-09-17 00:38:05.547953 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.547971 | orchestrator |
2025-09-17 00:38:05.547989 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:05.548008 | orchestrator | Wednesday 17 September 2025 00:38:02 +0000 (0:00:00.206) 0:00:20.361 ***
2025-09-17 00:38:05.548026 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.548044 | orchestrator |
2025-09-17 00:38:05.548065 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:05.548076 | orchestrator | Wednesday 17 September 2025 00:38:03 +0000 (0:00:00.599) 0:00:20.960 ***
2025-09-17 00:38:05.548086 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.548097 | orchestrator |
2025-09-17 00:38:05.548107 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:05.548118 | orchestrator | Wednesday 17 September 2025 00:38:03 +0000 (0:00:00.198) 0:00:21.159 ***
2025-09-17 00:38:05.548129 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.548139 | orchestrator |
2025-09-17 00:38:05.548150 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:05.548161 | orchestrator | Wednesday 17 September 2025 00:38:03 +0000 (0:00:00.201) 0:00:21.361 ***
2025-09-17 00:38:05.548171 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.548181 | orchestrator |
2025-09-17 00:38:05.548192 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:05.548203 | orchestrator | Wednesday 17 September 2025 00:38:03 +0000 (0:00:00.202) 0:00:21.563 ***
2025-09-17 00:38:05.548213 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.548224 | orchestrator |
2025-09-17 00:38:05.548234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:05.548245 | orchestrator | Wednesday 17 September 2025 00:38:04 +0000 (0:00:00.218) 0:00:21.782 ***
2025-09-17 00:38:05.548255 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.548265 | orchestrator |
2025-09-17 00:38:05.548276 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:05.548286 | orchestrator | Wednesday 17 September 2025 00:38:04 +0000 (0:00:00.207) 0:00:21.989 ***
2025-09-17 00:38:05.548296 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.548307 | orchestrator |
2025-09-17 00:38:05.548317 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:05.548338 | orchestrator | Wednesday 17 September 2025 00:38:04 +0000 (0:00:00.206) 0:00:22.196 ***
2025-09-17 00:38:05.548349 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-17 00:38:05.548361 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-17 00:38:05.548371 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-17 00:38:05.548382 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-17 00:38:05.548393 | orchestrator |
2025-09-17 00:38:05.548403 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:05.548413 | orchestrator | Wednesday 17 September 2025 00:38:05 +0000 (0:00:00.695) 0:00:22.891 ***
2025-09-17 00:38:05.548424 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:05.548435 | orchestrator |
2025-09-17 00:38:05.548455 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:11.918520 | orchestrator | Wednesday 17 September 2025 00:38:05 +0000 (0:00:00.229) 0:00:23.120 ***
2025-09-17 00:38:11.918635 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:11.918651 | orchestrator |
2025-09-17 00:38:11.918664 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:11.918675 | orchestrator | Wednesday 17 September 2025 00:38:05 +0000 (0:00:00.204) 0:00:23.325 ***
2025-09-17 00:38:11.918686 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:11.918697 | orchestrator |
2025-09-17 00:38:11.918708 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:38:11.918718 | orchestrator | Wednesday 17 September 2025 00:38:05 +0000 (0:00:00.192) 0:00:23.517 ***
2025-09-17 00:38:11.918729 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:11.918739 | orchestrator |
2025-09-17 00:38:11.918750 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-17 00:38:11.918761 | orchestrator | Wednesday 17 September 2025 00:38:06 +0000 (0:00:00.193) 0:00:23.710 ***
2025-09-17 00:38:11.918772 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-09-17 00:38:11.918782 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-09-17 00:38:11.918793 | orchestrator |
2025-09-17 00:38:11.918803 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-17 00:38:11.918814 | orchestrator | Wednesday 17 September 2025 00:38:06 +0000 (0:00:00.349) 0:00:24.060 ***
2025-09-17 00:38:11.918824 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:11.918835 | orchestrator |
2025-09-17 00:38:11.918885 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-17 00:38:11.918899 | orchestrator | Wednesday 17 September 2025 00:38:06 +0000 (0:00:00.126) 0:00:24.187 ***
2025-09-17 00:38:11.918909 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:11.918920 | orchestrator |
2025-09-17 00:38:11.918931 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-17 00:38:11.918942 | orchestrator | Wednesday 17 September 2025 00:38:06 +0000 (0:00:00.132) 0:00:24.319 ***
2025-09-17 00:38:11.918952 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:38:11.918963 | orchestrator |
2025-09-17 00:38:11.918973 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-17 00:38:11.918984 | orchestrator | Wednesday 17 September 2025 00:38:06 +0000 (0:00:00.123) 0:00:24.443 ***
2025-09-17 00:38:11.918995 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:38:11.919006 | orchestrator |
2025-09-17 00:38:11.919017 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-17
00:38:11.919027 | orchestrator | Wednesday 17 September 2025 00:38:06 +0000 (0:00:00.138) 0:00:24.581 *** 2025-09-17 00:38:11.919039 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'}}) 2025-09-17 00:38:11.919051 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd1158166-3610-5fc1-bd8e-5288705939fa'}}) 2025-09-17 00:38:11.919064 | orchestrator | 2025-09-17 00:38:11.919076 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-17 00:38:11.919112 | orchestrator | Wednesday 17 September 2025 00:38:07 +0000 (0:00:00.195) 0:00:24.777 *** 2025-09-17 00:38:11.919127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'}})  2025-09-17 00:38:11.919141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd1158166-3610-5fc1-bd8e-5288705939fa'}})  2025-09-17 00:38:11.919153 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:38:11.919165 | orchestrator | 2025-09-17 00:38:11.919194 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-17 00:38:11.919207 | orchestrator | Wednesday 17 September 2025 00:38:07 +0000 (0:00:00.157) 0:00:24.934 *** 2025-09-17 00:38:11.919220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'}})  2025-09-17 00:38:11.919232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd1158166-3610-5fc1-bd8e-5288705939fa'}})  2025-09-17 00:38:11.919245 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:38:11.919257 | orchestrator | 2025-09-17 00:38:11.919269 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-17 00:38:11.919282 | 
orchestrator | Wednesday 17 September 2025 00:38:07 +0000 (0:00:00.150) 0:00:25.085 *** 2025-09-17 00:38:11.919294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'}})  2025-09-17 00:38:11.919306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd1158166-3610-5fc1-bd8e-5288705939fa'}})  2025-09-17 00:38:11.919319 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:38:11.919331 | orchestrator | 2025-09-17 00:38:11.919343 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-17 00:38:11.919355 | orchestrator | Wednesday 17 September 2025 00:38:07 +0000 (0:00:00.154) 0:00:25.239 *** 2025-09-17 00:38:11.919368 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:38:11.919380 | orchestrator | 2025-09-17 00:38:11.919392 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-17 00:38:11.919405 | orchestrator | Wednesday 17 September 2025 00:38:07 +0000 (0:00:00.145) 0:00:25.385 *** 2025-09-17 00:38:11.919415 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:38:11.919426 | orchestrator | 2025-09-17 00:38:11.919436 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-17 00:38:11.919446 | orchestrator | Wednesday 17 September 2025 00:38:07 +0000 (0:00:00.149) 0:00:25.534 *** 2025-09-17 00:38:11.919457 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:38:11.919467 | orchestrator | 2025-09-17 00:38:11.919495 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-17 00:38:11.919507 | orchestrator | Wednesday 17 September 2025 00:38:08 +0000 (0:00:00.128) 0:00:25.662 *** 2025-09-17 00:38:11.919518 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:38:11.919528 | orchestrator | 2025-09-17 00:38:11.919539 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-09-17 00:38:11.919549 | orchestrator | Wednesday 17 September 2025 00:38:08 +0000 (0:00:00.380) 0:00:26.043 *** 2025-09-17 00:38:11.919560 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:38:11.919570 | orchestrator | 2025-09-17 00:38:11.919581 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-17 00:38:11.919592 | orchestrator | Wednesday 17 September 2025 00:38:08 +0000 (0:00:00.149) 0:00:26.193 *** 2025-09-17 00:38:11.919602 | orchestrator | ok: [testbed-node-4] => { 2025-09-17 00:38:11.919613 | orchestrator |  "ceph_osd_devices": { 2025-09-17 00:38:11.919623 | orchestrator |  "sdb": { 2025-09-17 00:38:11.919634 | orchestrator |  "osd_lvm_uuid": "f65d6451-63aa-5ff6-99b4-c6c20cacdd2d" 2025-09-17 00:38:11.919645 | orchestrator |  }, 2025-09-17 00:38:11.919655 | orchestrator |  "sdc": { 2025-09-17 00:38:11.919674 | orchestrator |  "osd_lvm_uuid": "d1158166-3610-5fc1-bd8e-5288705939fa" 2025-09-17 00:38:11.919685 | orchestrator |  } 2025-09-17 00:38:11.919696 | orchestrator |  } 2025-09-17 00:38:11.919706 | orchestrator | } 2025-09-17 00:38:11.919717 | orchestrator | 2025-09-17 00:38:11.919728 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-17 00:38:11.919739 | orchestrator | Wednesday 17 September 2025 00:38:08 +0000 (0:00:00.149) 0:00:26.343 *** 2025-09-17 00:38:11.919749 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:38:11.919760 | orchestrator | 2025-09-17 00:38:11.919771 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-17 00:38:11.919781 | orchestrator | Wednesday 17 September 2025 00:38:08 +0000 (0:00:00.151) 0:00:26.495 *** 2025-09-17 00:38:11.919792 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:38:11.919802 | orchestrator | 2025-09-17 00:38:11.919812 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-09-17 00:38:11.919823 | orchestrator | Wednesday 17 September 2025 00:38:09 +0000 (0:00:00.140) 0:00:26.635 *** 2025-09-17 00:38:11.919833 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:38:11.919844 | orchestrator | 2025-09-17 00:38:11.919874 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-17 00:38:11.919884 | orchestrator | Wednesday 17 September 2025 00:38:09 +0000 (0:00:00.147) 0:00:26.783 *** 2025-09-17 00:38:11.919895 | orchestrator | changed: [testbed-node-4] => { 2025-09-17 00:38:11.919905 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-17 00:38:11.919916 | orchestrator |  "ceph_osd_devices": { 2025-09-17 00:38:11.919926 | orchestrator |  "sdb": { 2025-09-17 00:38:11.919937 | orchestrator |  "osd_lvm_uuid": "f65d6451-63aa-5ff6-99b4-c6c20cacdd2d" 2025-09-17 00:38:11.919948 | orchestrator |  }, 2025-09-17 00:38:11.919958 | orchestrator |  "sdc": { 2025-09-17 00:38:11.919969 | orchestrator |  "osd_lvm_uuid": "d1158166-3610-5fc1-bd8e-5288705939fa" 2025-09-17 00:38:11.919980 | orchestrator |  } 2025-09-17 00:38:11.919990 | orchestrator |  }, 2025-09-17 00:38:11.920001 | orchestrator |  "lvm_volumes": [ 2025-09-17 00:38:11.920011 | orchestrator |  { 2025-09-17 00:38:11.920022 | orchestrator |  "data": "osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d", 2025-09-17 00:38:11.920032 | orchestrator |  "data_vg": "ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d" 2025-09-17 00:38:11.920043 | orchestrator |  }, 2025-09-17 00:38:11.920053 | orchestrator |  { 2025-09-17 00:38:11.920063 | orchestrator |  "data": "osd-block-d1158166-3610-5fc1-bd8e-5288705939fa", 2025-09-17 00:38:11.920074 | orchestrator |  "data_vg": "ceph-d1158166-3610-5fc1-bd8e-5288705939fa" 2025-09-17 00:38:11.920084 | orchestrator |  } 2025-09-17 00:38:11.920095 | orchestrator |  ] 2025-09-17 00:38:11.920105 | orchestrator |  } 2025-09-17 00:38:11.920116 | 
orchestrator | } 2025-09-17 00:38:11.920126 | orchestrator | 2025-09-17 00:38:11.920137 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-17 00:38:11.920147 | orchestrator | Wednesday 17 September 2025 00:38:09 +0000 (0:00:00.224) 0:00:27.008 *** 2025-09-17 00:38:11.920158 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-17 00:38:11.920168 | orchestrator | 2025-09-17 00:38:11.920178 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-17 00:38:11.920189 | orchestrator | 2025-09-17 00:38:11.920199 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-17 00:38:11.920210 | orchestrator | Wednesday 17 September 2025 00:38:10 +0000 (0:00:01.086) 0:00:28.094 *** 2025-09-17 00:38:11.920220 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-17 00:38:11.920231 | orchestrator | 2025-09-17 00:38:11.920241 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-17 00:38:11.920251 | orchestrator | Wednesday 17 September 2025 00:38:10 +0000 (0:00:00.473) 0:00:28.568 *** 2025-09-17 00:38:11.920269 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:38:11.920280 | orchestrator | 2025-09-17 00:38:11.920296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:11.920307 | orchestrator | Wednesday 17 September 2025 00:38:11 +0000 (0:00:00.574) 0:00:29.143 *** 2025-09-17 00:38:11.920318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-17 00:38:11.920328 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-17 00:38:11.920339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-17 
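The configuration data printed above pairs each OSD device with a deterministic `osd_lvm_uuid` and derives `lvm_volumes` entries named `osd-block-<uuid>` / `ceph-<uuid>`. A minimal sketch of that derivation, assuming a name-based UUIDv5 scheme over a hypothetical namespace (the printed UUIDs are version 5, but the exact inputs OSISM hashes are an assumption here):

```python
import uuid

# Hypothetical namespace for illustration only -- not OSISM's actual value.
HYPOTHETICAL_NS = uuid.UUID("00000000-0000-0000-0000-000000000000")

def osd_lvm_uuid(hostname: str, device: str) -> str:
    # uuid5 is deterministic for the same inputs, so re-running the
    # playbook would regenerate identical VG/LV names (assumed rationale).
    return str(uuid.uuid5(HYPOTHETICAL_NS, f"{hostname}-{device}"))

def build_lvm_volumes(ceph_osd_devices: dict) -> list[dict]:
    # Block-only layout, matching this run (the block+db / block+wal /
    # block+db+wal variants were all skipped in the log above).
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# UUIDs copied from the testbed-node-4 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "f65d6451-63aa-5ff6-99b4-c6c20cacdd2d"},
    "sdc": {"osd_lvm_uuid": "d1158166-3610-5fc1-bd8e-5288705939fa"},
}
volumes = build_lvm_volumes(devices)
print(volumes[0]["data"])  # osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d
```

The resulting list matches the `lvm_volumes` structure written to the configuration file by the handler above; ceph-ansible consumes it to create one VG/LV pair per OSD.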
00:38:11.920349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-17 00:38:11.920360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-17 00:38:11.920370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-17 00:38:11.920387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-17 00:38:18.771010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-17 00:38:18.771127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-17 00:38:18.771143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-17 00:38:18.771155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-17 00:38:18.771166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-17 00:38:18.771177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-17 00:38:18.771188 | orchestrator | 2025-09-17 00:38:18.771200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771212 | orchestrator | Wednesday 17 September 2025 00:38:11 +0000 (0:00:00.351) 0:00:29.494 *** 2025-09-17 00:38:18.771223 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.771234 | orchestrator | 2025-09-17 00:38:18.771245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771256 | orchestrator | Wednesday 17 September 2025 00:38:12 +0000 (0:00:00.188) 0:00:29.683 *** 2025-09-17 00:38:18.771266 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.771277 | orchestrator | 
2025-09-17 00:38:18.771288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771298 | orchestrator | Wednesday 17 September 2025 00:38:12 +0000 (0:00:00.177) 0:00:29.860 *** 2025-09-17 00:38:18.771309 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.771320 | orchestrator | 2025-09-17 00:38:18.771330 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771341 | orchestrator | Wednesday 17 September 2025 00:38:12 +0000 (0:00:00.196) 0:00:30.057 *** 2025-09-17 00:38:18.771351 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.771362 | orchestrator | 2025-09-17 00:38:18.771373 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771383 | orchestrator | Wednesday 17 September 2025 00:38:12 +0000 (0:00:00.171) 0:00:30.228 *** 2025-09-17 00:38:18.771394 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.771404 | orchestrator | 2025-09-17 00:38:18.771415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771425 | orchestrator | Wednesday 17 September 2025 00:38:12 +0000 (0:00:00.149) 0:00:30.377 *** 2025-09-17 00:38:18.771436 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.771447 | orchestrator | 2025-09-17 00:38:18.771457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771468 | orchestrator | Wednesday 17 September 2025 00:38:12 +0000 (0:00:00.173) 0:00:30.551 *** 2025-09-17 00:38:18.771479 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.771513 | orchestrator | 2025-09-17 00:38:18.771525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771536 | orchestrator | Wednesday 17 September 2025 00:38:13 +0000 
(0:00:00.171) 0:00:30.722 *** 2025-09-17 00:38:18.771546 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.771557 | orchestrator | 2025-09-17 00:38:18.771567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771578 | orchestrator | Wednesday 17 September 2025 00:38:13 +0000 (0:00:00.143) 0:00:30.866 *** 2025-09-17 00:38:18.771589 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571) 2025-09-17 00:38:18.771601 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571) 2025-09-17 00:38:18.771611 | orchestrator | 2025-09-17 00:38:18.771622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771632 | orchestrator | Wednesday 17 September 2025 00:38:13 +0000 (0:00:00.473) 0:00:31.339 *** 2025-09-17 00:38:18.771643 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_833e18f8-a2f7-4c8c-b617-8f83ac55bde9) 2025-09-17 00:38:18.771653 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_833e18f8-a2f7-4c8c-b617-8f83ac55bde9) 2025-09-17 00:38:18.771664 | orchestrator | 2025-09-17 00:38:18.771675 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771685 | orchestrator | Wednesday 17 September 2025 00:38:14 +0000 (0:00:00.670) 0:00:32.009 *** 2025-09-17 00:38:18.771696 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d2e8bc3-4c44-4e8e-a645-39611fbfc66e) 2025-09-17 00:38:18.771707 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d2e8bc3-4c44-4e8e-a645-39611fbfc66e) 2025-09-17 00:38:18.771717 | orchestrator | 2025-09-17 00:38:18.771728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771738 | orchestrator | 
Wednesday 17 September 2025 00:38:14 +0000 (0:00:00.315) 0:00:32.325 *** 2025-09-17 00:38:18.771749 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_922621dd-972b-4e9a-bc9e-e1e44ba503f7) 2025-09-17 00:38:18.771759 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_922621dd-972b-4e9a-bc9e-e1e44ba503f7) 2025-09-17 00:38:18.771770 | orchestrator | 2025-09-17 00:38:18.771780 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:38:18.771791 | orchestrator | Wednesday 17 September 2025 00:38:15 +0000 (0:00:00.416) 0:00:32.742 *** 2025-09-17 00:38:18.771801 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-17 00:38:18.771811 | orchestrator | 2025-09-17 00:38:18.771822 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.771832 | orchestrator | Wednesday 17 September 2025 00:38:15 +0000 (0:00:00.310) 0:00:33.052 *** 2025-09-17 00:38:18.771879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-17 00:38:18.771891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-17 00:38:18.771902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-17 00:38:18.771912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-17 00:38:18.771923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-17 00:38:18.771933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-17 00:38:18.771961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-17 00:38:18.771973 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-17 00:38:18.771984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-17 00:38:18.772004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-17 00:38:18.772015 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-17 00:38:18.772026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-17 00:38:18.772037 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-17 00:38:18.772047 | orchestrator | 2025-09-17 00:38:18.772057 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772068 | orchestrator | Wednesday 17 September 2025 00:38:15 +0000 (0:00:00.301) 0:00:33.354 *** 2025-09-17 00:38:18.772079 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772089 | orchestrator | 2025-09-17 00:38:18.772100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772111 | orchestrator | Wednesday 17 September 2025 00:38:15 +0000 (0:00:00.152) 0:00:33.506 *** 2025-09-17 00:38:18.772121 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772132 | orchestrator | 2025-09-17 00:38:18.772142 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772153 | orchestrator | Wednesday 17 September 2025 00:38:16 +0000 (0:00:00.193) 0:00:33.700 *** 2025-09-17 00:38:18.772163 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772174 | orchestrator | 2025-09-17 00:38:18.772189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772200 | 
orchestrator | Wednesday 17 September 2025 00:38:16 +0000 (0:00:00.185) 0:00:33.886 *** 2025-09-17 00:38:18.772211 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772221 | orchestrator | 2025-09-17 00:38:18.772232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772242 | orchestrator | Wednesday 17 September 2025 00:38:16 +0000 (0:00:00.191) 0:00:34.077 *** 2025-09-17 00:38:18.772253 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772263 | orchestrator | 2025-09-17 00:38:18.772274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772284 | orchestrator | Wednesday 17 September 2025 00:38:16 +0000 (0:00:00.156) 0:00:34.234 *** 2025-09-17 00:38:18.772294 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772305 | orchestrator | 2025-09-17 00:38:18.772315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772326 | orchestrator | Wednesday 17 September 2025 00:38:17 +0000 (0:00:00.495) 0:00:34.729 *** 2025-09-17 00:38:18.772336 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772347 | orchestrator | 2025-09-17 00:38:18.772357 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772368 | orchestrator | Wednesday 17 September 2025 00:38:17 +0000 (0:00:00.188) 0:00:34.918 *** 2025-09-17 00:38:18.772378 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772388 | orchestrator | 2025-09-17 00:38:18.772399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772409 | orchestrator | Wednesday 17 September 2025 00:38:17 +0000 (0:00:00.176) 0:00:35.095 *** 2025-09-17 00:38:18.772420 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-17 00:38:18.772431 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-09-17 00:38:18.772441 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-17 00:38:18.772452 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-17 00:38:18.772463 | orchestrator | 2025-09-17 00:38:18.772473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772484 | orchestrator | Wednesday 17 September 2025 00:38:18 +0000 (0:00:00.561) 0:00:35.656 *** 2025-09-17 00:38:18.772494 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772505 | orchestrator | 2025-09-17 00:38:18.772515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772532 | orchestrator | Wednesday 17 September 2025 00:38:18 +0000 (0:00:00.165) 0:00:35.822 *** 2025-09-17 00:38:18.772543 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772553 | orchestrator | 2025-09-17 00:38:18.772564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772574 | orchestrator | Wednesday 17 September 2025 00:38:18 +0000 (0:00:00.167) 0:00:35.989 *** 2025-09-17 00:38:18.772585 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772595 | orchestrator | 2025-09-17 00:38:18.772606 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:38:18.772617 | orchestrator | Wednesday 17 September 2025 00:38:18 +0000 (0:00:00.181) 0:00:36.170 *** 2025-09-17 00:38:18.772627 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:18.772638 | orchestrator | 2025-09-17 00:38:18.772648 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-17 00:38:18.772665 | orchestrator | Wednesday 17 September 2025 00:38:18 +0000 (0:00:00.175) 0:00:36.345 *** 2025-09-17 00:38:22.489736 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'sdb', 'value': None}) 2025-09-17 00:38:22.489908 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-17 00:38:22.489925 | orchestrator | 2025-09-17 00:38:22.489947 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-17 00:38:22.489959 | orchestrator | Wednesday 17 September 2025 00:38:18 +0000 (0:00:00.153) 0:00:36.499 *** 2025-09-17 00:38:22.489970 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:22.489981 | orchestrator | 2025-09-17 00:38:22.489993 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-17 00:38:22.490004 | orchestrator | Wednesday 17 September 2025 00:38:19 +0000 (0:00:00.127) 0:00:36.626 *** 2025-09-17 00:38:22.490014 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:22.490071 | orchestrator | 2025-09-17 00:38:22.490082 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-17 00:38:22.490093 | orchestrator | Wednesday 17 September 2025 00:38:19 +0000 (0:00:00.128) 0:00:36.755 *** 2025-09-17 00:38:22.490103 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:22.490114 | orchestrator | 2025-09-17 00:38:22.490125 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-17 00:38:22.490136 | orchestrator | Wednesday 17 September 2025 00:38:19 +0000 (0:00:00.130) 0:00:36.886 *** 2025-09-17 00:38:22.490146 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:38:22.490158 | orchestrator | 2025-09-17 00:38:22.490169 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-17 00:38:22.490179 | orchestrator | Wednesday 17 September 2025 00:38:19 +0000 (0:00:00.261) 0:00:37.147 *** 2025-09-17 00:38:22.490191 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2dc6576b-ad92-58b3-afc8-22b8ce20905e'}}) 
2025-09-17 00:38:22.490202 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a7b5a8de-6218-5c80-971a-bac3422a4161'}}) 2025-09-17 00:38:22.490213 | orchestrator | 2025-09-17 00:38:22.490224 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-17 00:38:22.490234 | orchestrator | Wednesday 17 September 2025 00:38:19 +0000 (0:00:00.165) 0:00:37.312 *** 2025-09-17 00:38:22.490246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2dc6576b-ad92-58b3-afc8-22b8ce20905e'}})  2025-09-17 00:38:22.490258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a7b5a8de-6218-5c80-971a-bac3422a4161'}})  2025-09-17 00:38:22.490272 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:22.490284 | orchestrator | 2025-09-17 00:38:22.490297 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-17 00:38:22.490309 | orchestrator | Wednesday 17 September 2025 00:38:19 +0000 (0:00:00.140) 0:00:37.453 *** 2025-09-17 00:38:22.490321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2dc6576b-ad92-58b3-afc8-22b8ce20905e'}})  2025-09-17 00:38:22.490363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a7b5a8de-6218-5c80-971a-bac3422a4161'}})  2025-09-17 00:38:22.490376 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:22.490389 | orchestrator | 2025-09-17 00:38:22.490401 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-17 00:38:22.490413 | orchestrator | Wednesday 17 September 2025 00:38:20 +0000 (0:00:00.138) 0:00:37.592 *** 2025-09-17 00:38:22.490425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2dc6576b-ad92-58b3-afc8-22b8ce20905e'}})  2025-09-17 
00:38:22.490453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a7b5a8de-6218-5c80-971a-bac3422a4161'}})  2025-09-17 00:38:22.490466 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:22.490478 | orchestrator | 2025-09-17 00:38:22.490490 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-17 00:38:22.490502 | orchestrator | Wednesday 17 September 2025 00:38:20 +0000 (0:00:00.132) 0:00:37.724 *** 2025-09-17 00:38:22.490515 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:38:22.490527 | orchestrator | 2025-09-17 00:38:22.490539 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-17 00:38:22.490551 | orchestrator | Wednesday 17 September 2025 00:38:20 +0000 (0:00:00.122) 0:00:37.846 *** 2025-09-17 00:38:22.490563 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:38:22.490575 | orchestrator | 2025-09-17 00:38:22.490587 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-17 00:38:22.490599 | orchestrator | Wednesday 17 September 2025 00:38:20 +0000 (0:00:00.124) 0:00:37.971 *** 2025-09-17 00:38:22.490612 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:22.490624 | orchestrator | 2025-09-17 00:38:22.490636 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-17 00:38:22.490654 | orchestrator | Wednesday 17 September 2025 00:38:20 +0000 (0:00:00.122) 0:00:38.093 *** 2025-09-17 00:38:22.490665 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:38:22.490676 | orchestrator | 2025-09-17 00:38:22.490686 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-17 00:38:22.490696 | orchestrator | Wednesday 17 September 2025 00:38:20 +0000 (0:00:00.135) 0:00:38.229 *** 2025-09-17 00:38:22.490707 | orchestrator | skipping: [testbed-node-5] 
2025-09-17 00:38:22.490718 | orchestrator |
2025-09-17 00:38:22.490728 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-17 00:38:22.490739 | orchestrator | Wednesday 17 September 2025 00:38:20 +0000 (0:00:00.115) 0:00:38.344 ***
2025-09-17 00:38:22.490750 | orchestrator | ok: [testbed-node-5] => {
2025-09-17 00:38:22.490760 | orchestrator |     "ceph_osd_devices": {
2025-09-17 00:38:22.490771 | orchestrator |         "sdb": {
2025-09-17 00:38:22.490782 | orchestrator |             "osd_lvm_uuid": "2dc6576b-ad92-58b3-afc8-22b8ce20905e"
2025-09-17 00:38:22.490809 | orchestrator |         },
2025-09-17 00:38:22.490821 | orchestrator |         "sdc": {
2025-09-17 00:38:22.490832 | orchestrator |             "osd_lvm_uuid": "a7b5a8de-6218-5c80-971a-bac3422a4161"
2025-09-17 00:38:22.490842 | orchestrator |         }
2025-09-17 00:38:22.490878 | orchestrator |     }
2025-09-17 00:38:22.490890 | orchestrator | }
2025-09-17 00:38:22.490901 | orchestrator |
2025-09-17 00:38:22.490911 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-17 00:38:22.490922 | orchestrator | Wednesday 17 September 2025 00:38:20 +0000 (0:00:00.112) 0:00:38.457 ***
2025-09-17 00:38:22.490932 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:38:22.490943 | orchestrator |
2025-09-17 00:38:22.490953 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-17 00:38:22.490964 | orchestrator | Wednesday 17 September 2025 00:38:20 +0000 (0:00:00.120) 0:00:38.577 ***
2025-09-17 00:38:22.490974 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:38:22.490985 | orchestrator |
2025-09-17 00:38:22.490995 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-17 00:38:22.491014 | orchestrator | Wednesday 17 September 2025 00:38:21 +0000 (0:00:00.242) 0:00:38.820 ***
2025-09-17 00:38:22.491024 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:38:22.491035 | orchestrator |
2025-09-17 00:38:22.491045 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-17 00:38:22.491056 | orchestrator | Wednesday 17 September 2025 00:38:21 +0000 (0:00:00.139) 0:00:38.959 ***
2025-09-17 00:38:22.491067 | orchestrator | changed: [testbed-node-5] => {
2025-09-17 00:38:22.491077 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-17 00:38:22.491088 | orchestrator |         "ceph_osd_devices": {
2025-09-17 00:38:22.491098 | orchestrator |             "sdb": {
2025-09-17 00:38:22.491109 | orchestrator |                 "osd_lvm_uuid": "2dc6576b-ad92-58b3-afc8-22b8ce20905e"
2025-09-17 00:38:22.491120 | orchestrator |             },
2025-09-17 00:38:22.491130 | orchestrator |             "sdc": {
2025-09-17 00:38:22.491141 | orchestrator |                 "osd_lvm_uuid": "a7b5a8de-6218-5c80-971a-bac3422a4161"
2025-09-17 00:38:22.491151 | orchestrator |             }
2025-09-17 00:38:22.491162 | orchestrator |         },
2025-09-17 00:38:22.491172 | orchestrator |         "lvm_volumes": [
2025-09-17 00:38:22.491183 | orchestrator |             {
2025-09-17 00:38:22.491193 | orchestrator |                 "data": "osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e",
2025-09-17 00:38:22.491204 | orchestrator |                 "data_vg": "ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e"
2025-09-17 00:38:22.491214 | orchestrator |             },
2025-09-17 00:38:22.491225 | orchestrator |             {
2025-09-17 00:38:22.491235 | orchestrator |                 "data": "osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161",
2025-09-17 00:38:22.491246 | orchestrator |                 "data_vg": "ceph-a7b5a8de-6218-5c80-971a-bac3422a4161"
2025-09-17 00:38:22.491257 | orchestrator |             }
2025-09-17 00:38:22.491267 | orchestrator |         ]
2025-09-17 00:38:22.491278 | orchestrator |     }
2025-09-17 00:38:22.491293 | orchestrator | }
2025-09-17 00:38:22.491304 | orchestrator |
2025-09-17 00:38:22.491314 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-17 00:38:22.491325 | orchestrator | Wednesday 17 September 2025 00:38:21 +0000 (0:00:00.187) 0:00:39.147 ***
2025-09-17 00:38:22.491336 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-17 00:38:22.491346 | orchestrator |
2025-09-17 00:38:22.491357 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:38:22.491368 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2025-09-17 00:38:22.491380 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2025-09-17 00:38:22.491391 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2025-09-17 00:38:22.491402 | orchestrator |
2025-09-17 00:38:22.491412 | orchestrator |
2025-09-17 00:38:22.491423 | orchestrator |
2025-09-17 00:38:22.491433 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:38:22.491444 | orchestrator | Wednesday 17 September 2025 00:38:22 +0000 (0:00:00.904) 0:00:40.051 ***
2025-09-17 00:38:22.491454 | orchestrator | ===============================================================================
2025-09-17 00:38:22.491465 | orchestrator | Write configuration file ------------------------------------------------ 4.07s
2025-09-17 00:38:22.491475 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2025-09-17 00:38:22.491485 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s
2025-09-17 00:38:22.491496 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s
2025-09-17 00:38:22.491507 | orchestrator | Get initial list of available block devices ----------------------------- 1.03s
2025-09-17 00:38:22.491524 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.98s
2025-09-17 00:38:22.491534 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.71s
2025-09-17 00:38:22.491545 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2025-09-17 00:38:22.491555 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-09-17 00:38:22.491566 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.68s
2025-09-17 00:38:22.491576 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2025-09-17 00:38:22.491587 | orchestrator | Set WAL devices config data --------------------------------------------- 0.66s
2025-09-17 00:38:22.491597 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-09-17 00:38:22.491608 | orchestrator | Print configuration data ------------------------------------------------ 0.61s
2025-09-17 00:38:22.491626 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2025-09-17 00:38:22.737228 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2025-09-17 00:38:22.737318 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2025-09-17 00:38:22.737330 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s
2025-09-17 00:38:22.737341 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.54s
2025-09-17 00:38:22.737351 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.52s
2025-09-17 00:38:45.386722 | orchestrator | 2025-09-17 00:38:45 | INFO  | Task 25f24b10-f458-4f3d-9139-4a41c5d30a68 (sync inventory) is running in background. Output coming soon.
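The TASKS RECAP above lists the slowest tasks as `<name> ---- <seconds>s`. When post-processing such logs, a line of this shape can be split back into a task name and a duration; the regex below is inferred from the lines in this log, not an official Ansible profile format guarantee:

```python
import re

# Parse a profile line of the form "<task name> ----...---- 1.23s" into
# (task_name, seconds). The pattern is an assumption based on the recap
# lines shown in this log.
RECAP_RE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_recap_line(line: str):
    m = RECAP_RE.match(line.strip())
    if m is None:
        return None
    return m.group("task"), float(m.group("secs"))

entry = parse_recap_line(
    "Write configuration file ------------------------------------------------ 4.07s"
)
```

The lazy `.+?` plus the anchored dash run keeps trailing dashes out of the task name, so names containing spaces (or even single hyphens) survive intact.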
2025-09-17 00:39:10.279779 | orchestrator | 2025-09-17 00:38:47 | INFO  | Starting group_vars file reorganization
2025-09-17 00:39:10.279954 | orchestrator | 2025-09-17 00:38:47 | INFO  | Moved 0 file(s) to their respective directories
2025-09-17 00:39:10.279973 | orchestrator | 2025-09-17 00:38:47 | INFO  | Group_vars file reorganization completed
2025-09-17 00:39:10.279985 | orchestrator | 2025-09-17 00:38:50 | INFO  | Starting variable preparation from inventory
2025-09-17 00:39:10.279996 | orchestrator | 2025-09-17 00:38:53 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-17 00:39:10.280007 | orchestrator | 2025-09-17 00:38:53 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-17 00:39:10.280018 | orchestrator | 2025-09-17 00:38:53 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-17 00:39:10.280050 | orchestrator | 2025-09-17 00:38:53 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-17 00:39:10.280062 | orchestrator | 2025-09-17 00:38:53 | INFO  | Variable preparation completed
2025-09-17 00:39:10.280073 | orchestrator | 2025-09-17 00:38:54 | INFO  | Starting inventory overwrite handling
2025-09-17 00:39:10.280084 | orchestrator | 2025-09-17 00:38:54 | INFO  | Handling group overwrites in 99-overwrite
2025-09-17 00:39:10.280101 | orchestrator | 2025-09-17 00:38:54 | INFO  | Removing group frr:children from 60-generic
2025-09-17 00:39:10.280112 | orchestrator | 2025-09-17 00:38:54 | INFO  | Removing group storage:children from 50-kolla
2025-09-17 00:39:10.280123 | orchestrator | 2025-09-17 00:38:54 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-17 00:39:10.280134 | orchestrator | 2025-09-17 00:38:54 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-17 00:39:10.280145 | orchestrator | 2025-09-17 00:38:54 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-17 00:39:10.280156 | orchestrator | 2025-09-17 00:38:54 | INFO  | Handling group overwrites in 20-roles
2025-09-17 00:39:10.280167 | orchestrator | 2025-09-17 00:38:54 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-17 00:39:10.280200 | orchestrator | 2025-09-17 00:38:54 | INFO  | Removed 6 group(s) in total
2025-09-17 00:39:10.280211 | orchestrator | 2025-09-17 00:38:54 | INFO  | Inventory overwrite handling completed
2025-09-17 00:39:10.280222 | orchestrator | 2025-09-17 00:38:55 | INFO  | Starting merge of inventory files
2025-09-17 00:39:10.280240 | orchestrator | 2025-09-17 00:38:55 | INFO  | Inventory files merged successfully
2025-09-17 00:39:10.280259 | orchestrator | 2025-09-17 00:38:59 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-17 00:39:10.280271 | orchestrator | 2025-09-17 00:39:09 | INFO  | Successfully wrote ClusterShell configuration
2025-09-17 00:39:10.280282 | orchestrator | [master c70487e] 2025-09-17-00-39
2025-09-17 00:39:10.280294 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-17 00:39:12.421841 | orchestrator | 2025-09-17 00:39:12 | INFO  | Task 367745cc-3b5f-494b-b780-db56b1192569 (ceph-create-lvm-devices) was prepared for execution.
2025-09-17 00:39:12.421971 | orchestrator | 2025-09-17 00:39:12 | INFO  | It takes a moment until task 367745cc-3b5f-494b-b780-db56b1192569 (ceph-create-lvm-devices) has been started and output is visible here.
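The "inventory overwrite handling" messages above describe removing a group from lower-priority inventory layers (e.g. dropping `frr:children` from `60-generic`) whenever a higher-priority layer such as `99-overwrite` redefines it, so the subsequent merge has a single source of truth per group. A rough sketch of that idea over in-memory dicts (the dict-of-dicts layout and function name are assumptions for illustration, not the OSISM implementation):

```python
# Sketch of inventory overwrite handling: when a higher-priority layer
# defines a group, drop that group from every lower-priority layer before
# merging. Layer names mirror the log; the data layout is illustrative.
def handle_overwrites(layers: dict, priority_order: list) -> int:
    removed = 0
    for i, high in enumerate(priority_order):
        for group in layers.get(high, {}):
            # Layers later in priority_order have lower priority.
            for low in priority_order[i + 1:]:
                if group in layers.get(low, {}):
                    del layers[low][group]
                    removed += 1
    return removed

layers = {
    "99-overwrite": {"frr:children": ["testbed-nodes"]},
    "60-generic": {"frr:children": ["all"], "other_group": []},
}
removed = handle_overwrites(layers, ["99-overwrite", "60-generic"])
```

After this pass, only `99-overwrite` still defines `frr:children`, which matches the "Removing group frr:children from 60-generic" message in the log.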
2025-09-17 00:39:22.242847 | orchestrator |
2025-09-17 00:39:22.243002 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-17 00:39:22.243015 | orchestrator |
2025-09-17 00:39:22.243025 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-17 00:39:22.243036 | orchestrator | Wednesday 17 September 2025 00:39:15 +0000 (0:00:00.375) 0:00:00.375 ***
2025-09-17 00:39:22.243045 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 00:39:22.243054 | orchestrator |
2025-09-17 00:39:22.243063 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-17 00:39:22.243072 | orchestrator | Wednesday 17 September 2025 00:39:16 +0000 (0:00:00.221) 0:00:00.596 ***
2025-09-17 00:39:22.243080 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:39:22.243090 | orchestrator |
2025-09-17 00:39:22.243099 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243107 | orchestrator | Wednesday 17 September 2025 00:39:16 +0000 (0:00:00.184) 0:00:00.780 ***
2025-09-17 00:39:22.243116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-17 00:39:22.243126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-17 00:39:22.243135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-17 00:39:22.243144 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-17 00:39:22.243152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-17 00:39:22.243161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-17 00:39:22.243169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-17 00:39:22.243178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-17 00:39:22.243187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-17 00:39:22.243196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-17 00:39:22.243204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-17 00:39:22.243213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-17 00:39:22.243221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-17 00:39:22.243230 | orchestrator |
2025-09-17 00:39:22.243239 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243268 | orchestrator | Wednesday 17 September 2025 00:39:16 +0000 (0:00:00.346) 0:00:01.127 ***
2025-09-17 00:39:22.243278 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.243286 | orchestrator |
2025-09-17 00:39:22.243295 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243303 | orchestrator | Wednesday 17 September 2025 00:39:16 +0000 (0:00:00.344) 0:00:01.471 ***
2025-09-17 00:39:22.243312 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.243320 | orchestrator |
2025-09-17 00:39:22.243329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243338 | orchestrator | Wednesday 17 September 2025 00:39:17 +0000 (0:00:00.180) 0:00:01.652 ***
2025-09-17 00:39:22.243347 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.243355 | orchestrator |
2025-09-17 00:39:22.243364 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243372 | orchestrator | Wednesday 17 September 2025 00:39:17 +0000 (0:00:00.187) 0:00:01.839 ***
2025-09-17 00:39:22.243381 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.243389 | orchestrator |
2025-09-17 00:39:22.243398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243408 | orchestrator | Wednesday 17 September 2025 00:39:17 +0000 (0:00:00.162) 0:00:02.002 ***
2025-09-17 00:39:22.243417 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.243427 | orchestrator |
2025-09-17 00:39:22.243436 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243446 | orchestrator | Wednesday 17 September 2025 00:39:17 +0000 (0:00:00.171) 0:00:02.173 ***
2025-09-17 00:39:22.243456 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.243466 | orchestrator |
2025-09-17 00:39:22.243475 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243484 | orchestrator | Wednesday 17 September 2025 00:39:17 +0000 (0:00:00.184) 0:00:02.358 ***
2025-09-17 00:39:22.243494 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.243504 | orchestrator |
2025-09-17 00:39:22.243514 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243524 | orchestrator | Wednesday 17 September 2025 00:39:17 +0000 (0:00:00.173) 0:00:02.532 ***
2025-09-17 00:39:22.243535 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.243545 | orchestrator |
2025-09-17 00:39:22.243555 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243564 | orchestrator | Wednesday 17 September 2025 00:39:18 +0000 (0:00:00.167) 0:00:02.699 ***
2025-09-17 00:39:22.243574 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130)
2025-09-17 00:39:22.243585 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130)
2025-09-17 00:39:22.243594 | orchestrator |
2025-09-17 00:39:22.243605 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243614 | orchestrator | Wednesday 17 September 2025 00:39:18 +0000 (0:00:00.359) 0:00:03.059 ***
2025-09-17 00:39:22.243638 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_03b82624-b2d4-4492-aa08-93320337b68f)
2025-09-17 00:39:22.243648 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_03b82624-b2d4-4492-aa08-93320337b68f)
2025-09-17 00:39:22.243657 | orchestrator |
2025-09-17 00:39:22.243665 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243674 | orchestrator | Wednesday 17 September 2025 00:39:18 +0000 (0:00:00.353) 0:00:03.413 ***
2025-09-17 00:39:22.243682 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6f825aad-5321-4538-8ab0-212b689e74fb)
2025-09-17 00:39:22.243691 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6f825aad-5321-4538-8ab0-212b689e74fb)
2025-09-17 00:39:22.243700 | orchestrator |
2025-09-17 00:39:22.243708 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243723 | orchestrator | Wednesday 17 September 2025 00:39:19 +0000 (0:00:00.510) 0:00:03.923 ***
2025-09-17 00:39:22.243732 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a)
2025-09-17 00:39:22.243741 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a)
2025-09-17 00:39:22.243749 | orchestrator |
2025-09-17 00:39:22.243757 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 00:39:22.243766 | orchestrator | Wednesday 17 September 2025 00:39:20 +0000 (0:00:00.717) 0:00:04.640 ***
2025-09-17 00:39:22.243775 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-17 00:39:22.243784 | orchestrator |
2025-09-17 00:39:22.243792 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:22.243801 | orchestrator | Wednesday 17 September 2025 00:39:20 +0000 (0:00:00.316) 0:00:04.957 ***
2025-09-17 00:39:22.243809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-17 00:39:22.243817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-17 00:39:22.243826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-17 00:39:22.243834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-17 00:39:22.243876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-17 00:39:22.243886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-17 00:39:22.243895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-17 00:39:22.243903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-17 00:39:22.243912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-17 00:39:22.243920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-17 00:39:22.243929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-17 00:39:22.243938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-17 00:39:22.243950 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-17 00:39:22.243959 | orchestrator |
2025-09-17 00:39:22.243968 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:22.243976 | orchestrator | Wednesday 17 September 2025 00:39:20 +0000 (0:00:00.335) 0:00:05.292 ***
2025-09-17 00:39:22.243985 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.243993 | orchestrator |
2025-09-17 00:39:22.244002 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:22.244011 | orchestrator | Wednesday 17 September 2025 00:39:20 +0000 (0:00:00.195) 0:00:05.487 ***
2025-09-17 00:39:22.244019 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.244027 | orchestrator |
2025-09-17 00:39:22.244036 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:22.244044 | orchestrator | Wednesday 17 September 2025 00:39:21 +0000 (0:00:00.183) 0:00:05.671 ***
2025-09-17 00:39:22.244053 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.244061 | orchestrator |
2025-09-17 00:39:22.244070 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:22.244078 | orchestrator | Wednesday 17 September 2025 00:39:21 +0000 (0:00:00.182) 0:00:05.854 ***
2025-09-17 00:39:22.244087 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.244095 | orchestrator |
2025-09-17 00:39:22.244104 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:22.244118 | orchestrator | Wednesday 17 September 2025 00:39:21 +0000 (0:00:00.191) 0:00:06.045 ***
2025-09-17 00:39:22.244127 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.244135 | orchestrator |
2025-09-17 00:39:22.244144 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:22.244152 | orchestrator | Wednesday 17 September 2025 00:39:21 +0000 (0:00:00.186) 0:00:06.231 ***
2025-09-17 00:39:22.244161 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.244169 | orchestrator |
2025-09-17 00:39:22.244178 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:22.244187 | orchestrator | Wednesday 17 September 2025 00:39:21 +0000 (0:00:00.189) 0:00:06.420 ***
2025-09-17 00:39:22.244195 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:22.244204 | orchestrator |
2025-09-17 00:39:22.244212 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:22.244221 | orchestrator | Wednesday 17 September 2025 00:39:22 +0000 (0:00:00.173) 0:00:06.594 ***
2025-09-17 00:39:22.244234 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609001 | orchestrator |
2025-09-17 00:39:29.609112 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:29.609126 | orchestrator | Wednesday 17 September 2025 00:39:22 +0000 (0:00:00.180) 0:00:06.775 ***
2025-09-17 00:39:29.609137 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-17 00:39:29.609149 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-17 00:39:29.609159 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-17 00:39:29.609168 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-17 00:39:29.609178 | orchestrator |
2025-09-17 00:39:29.609188 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:29.609198 | orchestrator | Wednesday 17 September 2025 00:39:23 +0000 (0:00:00.834) 0:00:07.609 ***
2025-09-17 00:39:29.609208 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609218 | orchestrator |
2025-09-17 00:39:29.609227 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:29.609237 | orchestrator | Wednesday 17 September 2025 00:39:23 +0000 (0:00:00.197) 0:00:07.807 ***
2025-09-17 00:39:29.609246 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609256 | orchestrator |
2025-09-17 00:39:29.609266 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:29.609275 | orchestrator | Wednesday 17 September 2025 00:39:23 +0000 (0:00:00.205) 0:00:08.012 ***
2025-09-17 00:39:29.609285 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609294 | orchestrator |
2025-09-17 00:39:29.609304 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 00:39:29.609314 | orchestrator | Wednesday 17 September 2025 00:39:23 +0000 (0:00:00.215) 0:00:08.228 ***
2025-09-17 00:39:29.609324 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609333 | orchestrator |
2025-09-17 00:39:29.609343 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-17 00:39:29.609352 | orchestrator | Wednesday 17 September 2025 00:39:23 +0000 (0:00:00.184) 0:00:08.413 ***
2025-09-17 00:39:29.609362 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609373 | orchestrator |
2025-09-17 00:39:29.609390 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-17 00:39:29.609406 | orchestrator | Wednesday 17 September 2025 00:39:24 +0000 (0:00:00.128) 0:00:08.542 ***
2025-09-17 00:39:29.609422 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'}})
2025-09-17 00:39:29.609438 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'}})
2025-09-17 00:39:29.609454 | orchestrator |
2025-09-17 00:39:29.609470 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-17 00:39:29.609486 | orchestrator | Wednesday 17 September 2025 00:39:24 +0000 (0:00:00.163) 0:00:08.705 ***
2025-09-17 00:39:29.609504 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})
2025-09-17 00:39:29.609545 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})
2025-09-17 00:39:29.609558 | orchestrator |
2025-09-17 00:39:29.609570 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-17 00:39:29.609581 | orchestrator | Wednesday 17 September 2025 00:39:26 +0000 (0:00:01.964) 0:00:10.669 ***
2025-09-17 00:39:29.609592 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})
2025-09-17 00:39:29.609605 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})
2025-09-17 00:39:29.609616 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609626 | orchestrator |
2025-09-17 00:39:29.609637 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-17 00:39:29.609648 | orchestrator | Wednesday 17 September 2025 00:39:26 +0000 (0:00:00.132) 0:00:10.802 ***
2025-09-17 00:39:29.609659 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})
2025-09-17 00:39:29.609670 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})
2025-09-17 00:39:29.609680 | orchestrator |
2025-09-17 00:39:29.609691 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-17 00:39:29.609702 | orchestrator | Wednesday 17 September 2025 00:39:27 +0000 (0:00:01.392) 0:00:12.194 ***
2025-09-17 00:39:29.609713 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})
2025-09-17 00:39:29.609724 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})
2025-09-17 00:39:29.609735 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609745 | orchestrator |
2025-09-17 00:39:29.609757 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-17 00:39:29.609767 | orchestrator | Wednesday 17 September 2025 00:39:27 +0000 (0:00:00.136) 0:00:12.331 ***
2025-09-17 00:39:29.609778 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609789 | orchestrator |
2025-09-17 00:39:29.609800 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-17 00:39:29.609827 | orchestrator | Wednesday 17 September 2025 00:39:27 +0000 (0:00:00.125) 0:00:12.456 ***
2025-09-17 00:39:29.609839 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})
2025-09-17 00:39:29.609848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})
2025-09-17 00:39:29.609884 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609895 | orchestrator |
2025-09-17 00:39:29.609904 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-17 00:39:29.609913 | orchestrator | Wednesday 17 September 2025 00:39:28 +0000 (0:00:00.260) 0:00:12.716 ***
2025-09-17 00:39:29.609923 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609932 | orchestrator |
2025-09-17 00:39:29.609941 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-17 00:39:29.609951 | orchestrator | Wednesday 17 September 2025 00:39:28 +0000 (0:00:00.121) 0:00:12.838 ***
2025-09-17 00:39:29.609960 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})
2025-09-17 00:39:29.609977 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})
2025-09-17 00:39:29.609986 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.609996 | orchestrator |
2025-09-17 00:39:29.610005 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-17 00:39:29.610060 | orchestrator | Wednesday 17 September 2025 00:39:28 +0000 (0:00:00.125) 0:00:12.963 ***
2025-09-17 00:39:29.610072 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.610081 | orchestrator |
2025-09-17 00:39:29.610090 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-17 00:39:29.610100 | orchestrator | Wednesday 17 September 2025 00:39:28 +0000 (0:00:00.129) 0:00:13.093 ***
2025-09-17 00:39:29.610109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})
2025-09-17 00:39:29.610119 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})
2025-09-17 00:39:29.610128 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.610138 | orchestrator |
2025-09-17 00:39:29.610147 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-17 00:39:29.610157 | orchestrator | Wednesday 17 September 2025 00:39:28 +0000 (0:00:00.152) 0:00:13.245 ***
2025-09-17 00:39:29.610166 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:39:29.610176 | orchestrator |
2025-09-17 00:39:29.610188 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-17 00:39:29.610204 | orchestrator | Wednesday 17 September 2025 00:39:28 +0000 (0:00:00.182) 0:00:13.428 ***
2025-09-17 00:39:29.610251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})
2025-09-17 00:39:29.610275 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})
2025-09-17 00:39:29.610292 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.610306 | orchestrator |
2025-09-17 00:39:29.610315 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-17 00:39:29.610325 | orchestrator | Wednesday 17 September 2025 00:39:29 +0000 (0:00:00.158) 0:00:13.587 ***
2025-09-17 00:39:29.610334 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})
2025-09-17 00:39:29.610344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})
2025-09-17 00:39:29.610353 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.610362 | orchestrator |
2025-09-17 00:39:29.610372 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-17 00:39:29.610381 | orchestrator | Wednesday 17 September 2025 00:39:29 +0000 (0:00:00.158) 0:00:13.746 ***
2025-09-17 00:39:29.610391 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})
2025-09-17 00:39:29.610400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})
2025-09-17 00:39:29.610410 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.610419 | orchestrator |
2025-09-17 00:39:29.610428 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-17 00:39:29.610438 | orchestrator | Wednesday 17 September 2025 00:39:29 +0000 (0:00:00.144) 0:00:13.890 ***
2025-09-17 00:39:29.610447 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.610465 | orchestrator |
2025-09-17 00:39:29.610475 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-17 00:39:29.610484 | orchestrator | Wednesday 17 September 2025 00:39:29 +0000 (0:00:00.126) 0:00:14.017 ***
2025-09-17 00:39:29.610509 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:39:29.610519 | orchestrator |
2025-09-17 00:39:29.610537 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-17 00:39:36.308595 | orchestrator | Wednesday 17 September 2025 00:39:29 +0000
(0:00:00.123) 0:00:14.140 *** 2025-09-17 00:39:36.308710 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.308725 | orchestrator | 2025-09-17 00:39:36.308738 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-17 00:39:36.308750 | orchestrator | Wednesday 17 September 2025 00:39:29 +0000 (0:00:00.126) 0:00:14.267 *** 2025-09-17 00:39:36.308761 | orchestrator | ok: [testbed-node-3] => { 2025-09-17 00:39:36.308772 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-17 00:39:36.308783 | orchestrator | } 2025-09-17 00:39:36.308795 | orchestrator | 2025-09-17 00:39:36.308806 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-17 00:39:36.308817 | orchestrator | Wednesday 17 September 2025 00:39:29 +0000 (0:00:00.251) 0:00:14.518 *** 2025-09-17 00:39:36.308828 | orchestrator | ok: [testbed-node-3] => { 2025-09-17 00:39:36.308839 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-17 00:39:36.308849 | orchestrator | } 2025-09-17 00:39:36.308915 | orchestrator | 2025-09-17 00:39:36.308927 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-17 00:39:36.308938 | orchestrator | Wednesday 17 September 2025 00:39:30 +0000 (0:00:00.142) 0:00:14.660 *** 2025-09-17 00:39:36.308948 | orchestrator | ok: [testbed-node-3] => { 2025-09-17 00:39:36.308959 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-17 00:39:36.308970 | orchestrator | } 2025-09-17 00:39:36.308981 | orchestrator | 2025-09-17 00:39:36.308993 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-17 00:39:36.309004 | orchestrator | Wednesday 17 September 2025 00:39:30 +0000 (0:00:00.142) 0:00:14.803 *** 2025-09-17 00:39:36.309014 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:39:36.309025 | orchestrator | 2025-09-17 00:39:36.309036 | orchestrator | TASK [Gather 
WAL VGs with total and available size in bytes] ******************* 2025-09-17 00:39:36.309047 | orchestrator | Wednesday 17 September 2025 00:39:30 +0000 (0:00:00.665) 0:00:15.469 *** 2025-09-17 00:39:36.309057 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:39:36.309068 | orchestrator | 2025-09-17 00:39:36.309079 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-17 00:39:36.309089 | orchestrator | Wednesday 17 September 2025 00:39:31 +0000 (0:00:00.522) 0:00:15.991 *** 2025-09-17 00:39:36.309100 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:39:36.309111 | orchestrator | 2025-09-17 00:39:36.309121 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-17 00:39:36.309133 | orchestrator | Wednesday 17 September 2025 00:39:32 +0000 (0:00:00.548) 0:00:16.539 *** 2025-09-17 00:39:36.309145 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:39:36.309157 | orchestrator | 2025-09-17 00:39:36.309169 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-17 00:39:36.309182 | orchestrator | Wednesday 17 September 2025 00:39:32 +0000 (0:00:00.178) 0:00:16.718 *** 2025-09-17 00:39:36.309194 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309206 | orchestrator | 2025-09-17 00:39:36.309219 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-17 00:39:36.309231 | orchestrator | Wednesday 17 September 2025 00:39:32 +0000 (0:00:00.156) 0:00:16.875 *** 2025-09-17 00:39:36.309243 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309254 | orchestrator | 2025-09-17 00:39:36.309264 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-17 00:39:36.309275 | orchestrator | Wednesday 17 September 2025 00:39:32 +0000 (0:00:00.141) 0:00:17.017 *** 2025-09-17 00:39:36.309286 | orchestrator | ok: 
[testbed-node-3] => { 2025-09-17 00:39:36.309321 | orchestrator |  "vgs_report": { 2025-09-17 00:39:36.309349 | orchestrator |  "vg": [] 2025-09-17 00:39:36.309361 | orchestrator |  } 2025-09-17 00:39:36.309371 | orchestrator | } 2025-09-17 00:39:36.309382 | orchestrator | 2025-09-17 00:39:36.309393 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-17 00:39:36.309404 | orchestrator | Wednesday 17 September 2025 00:39:32 +0000 (0:00:00.183) 0:00:17.200 *** 2025-09-17 00:39:36.309415 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309426 | orchestrator | 2025-09-17 00:39:36.309437 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-17 00:39:36.309447 | orchestrator | Wednesday 17 September 2025 00:39:32 +0000 (0:00:00.128) 0:00:17.329 *** 2025-09-17 00:39:36.309458 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309469 | orchestrator | 2025-09-17 00:39:36.309479 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-17 00:39:36.309490 | orchestrator | Wednesday 17 September 2025 00:39:32 +0000 (0:00:00.138) 0:00:17.467 *** 2025-09-17 00:39:36.309501 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309512 | orchestrator | 2025-09-17 00:39:36.309522 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-17 00:39:36.309533 | orchestrator | Wednesday 17 September 2025 00:39:33 +0000 (0:00:00.353) 0:00:17.821 *** 2025-09-17 00:39:36.309544 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309555 | orchestrator | 2025-09-17 00:39:36.309565 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-17 00:39:36.309576 | orchestrator | Wednesday 17 September 2025 00:39:33 +0000 (0:00:00.140) 0:00:17.961 *** 2025-09-17 00:39:36.309587 | orchestrator | skipping: 
[testbed-node-3] 2025-09-17 00:39:36.309598 | orchestrator | 2025-09-17 00:39:36.309609 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-17 00:39:36.309620 | orchestrator | Wednesday 17 September 2025 00:39:33 +0000 (0:00:00.138) 0:00:18.100 *** 2025-09-17 00:39:36.309631 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309641 | orchestrator | 2025-09-17 00:39:36.309652 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-17 00:39:36.309663 | orchestrator | Wednesday 17 September 2025 00:39:33 +0000 (0:00:00.134) 0:00:18.234 *** 2025-09-17 00:39:36.309673 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309684 | orchestrator | 2025-09-17 00:39:36.309695 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-17 00:39:36.309706 | orchestrator | Wednesday 17 September 2025 00:39:33 +0000 (0:00:00.156) 0:00:18.391 *** 2025-09-17 00:39:36.309717 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309727 | orchestrator | 2025-09-17 00:39:36.309738 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-17 00:39:36.309767 | orchestrator | Wednesday 17 September 2025 00:39:34 +0000 (0:00:00.176) 0:00:18.567 *** 2025-09-17 00:39:36.309779 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309790 | orchestrator | 2025-09-17 00:39:36.309801 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-17 00:39:36.309812 | orchestrator | Wednesday 17 September 2025 00:39:34 +0000 (0:00:00.141) 0:00:18.709 *** 2025-09-17 00:39:36.309822 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309833 | orchestrator | 2025-09-17 00:39:36.309844 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-17 00:39:36.309876 | 
orchestrator | Wednesday 17 September 2025 00:39:34 +0000 (0:00:00.161) 0:00:18.870 *** 2025-09-17 00:39:36.309888 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309898 | orchestrator | 2025-09-17 00:39:36.309909 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-17 00:39:36.309920 | orchestrator | Wednesday 17 September 2025 00:39:34 +0000 (0:00:00.189) 0:00:19.060 *** 2025-09-17 00:39:36.309931 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309942 | orchestrator | 2025-09-17 00:39:36.309961 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-17 00:39:36.309972 | orchestrator | Wednesday 17 September 2025 00:39:34 +0000 (0:00:00.150) 0:00:19.210 *** 2025-09-17 00:39:36.309983 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.309994 | orchestrator | 2025-09-17 00:39:36.310005 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-17 00:39:36.310069 | orchestrator | Wednesday 17 September 2025 00:39:34 +0000 (0:00:00.192) 0:00:19.402 *** 2025-09-17 00:39:36.310081 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.310092 | orchestrator | 2025-09-17 00:39:36.310103 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-17 00:39:36.310113 | orchestrator | Wednesday 17 September 2025 00:39:35 +0000 (0:00:00.159) 0:00:19.562 *** 2025-09-17 00:39:36.310125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})  2025-09-17 00:39:36.310138 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})  2025-09-17 00:39:36.310149 | orchestrator | skipping: [testbed-node-3] 2025-09-17 
00:39:36.310160 | orchestrator | 2025-09-17 00:39:36.310170 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-17 00:39:36.310181 | orchestrator | Wednesday 17 September 2025 00:39:35 +0000 (0:00:00.408) 0:00:19.970 *** 2025-09-17 00:39:36.310192 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})  2025-09-17 00:39:36.310203 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})  2025-09-17 00:39:36.310214 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.310225 | orchestrator | 2025-09-17 00:39:36.310236 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-17 00:39:36.310246 | orchestrator | Wednesday 17 September 2025 00:39:35 +0000 (0:00:00.182) 0:00:20.153 *** 2025-09-17 00:39:36.310257 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})  2025-09-17 00:39:36.310269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})  2025-09-17 00:39:36.310279 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.310290 | orchestrator | 2025-09-17 00:39:36.310301 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-17 00:39:36.310312 | orchestrator | Wednesday 17 September 2025 00:39:35 +0000 (0:00:00.211) 0:00:20.364 *** 2025-09-17 00:39:36.310323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})  2025-09-17 
00:39:36.310334 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})  2025-09-17 00:39:36.310345 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.310356 | orchestrator | 2025-09-17 00:39:36.310366 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-17 00:39:36.310377 | orchestrator | Wednesday 17 September 2025 00:39:35 +0000 (0:00:00.152) 0:00:20.516 *** 2025-09-17 00:39:36.310388 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})  2025-09-17 00:39:36.310399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})  2025-09-17 00:39:36.310410 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:36.310427 | orchestrator | 2025-09-17 00:39:36.310438 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-17 00:39:36.310449 | orchestrator | Wednesday 17 September 2025 00:39:36 +0000 (0:00:00.151) 0:00:20.668 *** 2025-09-17 00:39:36.310469 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})  2025-09-17 00:39:36.310487 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})  2025-09-17 00:39:41.632682 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:41.632790 | orchestrator | 2025-09-17 00:39:41.632805 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-17 00:39:41.632819 | orchestrator | Wednesday 17 September 2025 
00:39:36 +0000 (0:00:00.170) 0:00:20.838 *** 2025-09-17 00:39:41.632830 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})  2025-09-17 00:39:41.632842 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})  2025-09-17 00:39:41.632906 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:41.632918 | orchestrator | 2025-09-17 00:39:41.632930 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-17 00:39:41.632940 | orchestrator | Wednesday 17 September 2025 00:39:36 +0000 (0:00:00.189) 0:00:21.028 *** 2025-09-17 00:39:41.632951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})  2025-09-17 00:39:41.632962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})  2025-09-17 00:39:41.632973 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:41.632984 | orchestrator | 2025-09-17 00:39:41.632995 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-17 00:39:41.633006 | orchestrator | Wednesday 17 September 2025 00:39:36 +0000 (0:00:00.207) 0:00:21.236 *** 2025-09-17 00:39:41.633016 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:39:41.633028 | orchestrator | 2025-09-17 00:39:41.633038 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-17 00:39:41.633049 | orchestrator | Wednesday 17 September 2025 00:39:37 +0000 (0:00:00.527) 0:00:21.763 *** 2025-09-17 00:39:41.633059 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:39:41.633070 | 
orchestrator | 2025-09-17 00:39:41.633080 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-17 00:39:41.633091 | orchestrator | Wednesday 17 September 2025 00:39:37 +0000 (0:00:00.561) 0:00:22.325 *** 2025-09-17 00:39:41.633101 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:39:41.633112 | orchestrator | 2025-09-17 00:39:41.633122 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-17 00:39:41.633133 | orchestrator | Wednesday 17 September 2025 00:39:37 +0000 (0:00:00.143) 0:00:22.468 *** 2025-09-17 00:39:41.633144 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'vg_name': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'}) 2025-09-17 00:39:41.633155 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'vg_name': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'}) 2025-09-17 00:39:41.633166 | orchestrator | 2025-09-17 00:39:41.633192 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-17 00:39:41.633203 | orchestrator | Wednesday 17 September 2025 00:39:38 +0000 (0:00:00.166) 0:00:22.635 *** 2025-09-17 00:39:41.633214 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})  2025-09-17 00:39:41.633249 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})  2025-09-17 00:39:41.633260 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:41.633271 | orchestrator | 2025-09-17 00:39:41.633282 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-17 00:39:41.633292 | orchestrator | Wednesday 17 September 2025 00:39:38 
+0000 (0:00:00.276) 0:00:22.911 *** 2025-09-17 00:39:41.633303 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})  2025-09-17 00:39:41.633313 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})  2025-09-17 00:39:41.633324 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:41.633335 | orchestrator | 2025-09-17 00:39:41.633345 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-17 00:39:41.633356 | orchestrator | Wednesday 17 September 2025 00:39:38 +0000 (0:00:00.159) 0:00:23.071 *** 2025-09-17 00:39:41.633367 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'})  2025-09-17 00:39:41.633378 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'})  2025-09-17 00:39:41.633389 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:39:41.633399 | orchestrator | 2025-09-17 00:39:41.633410 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-17 00:39:41.633421 | orchestrator | Wednesday 17 September 2025 00:39:38 +0000 (0:00:00.160) 0:00:23.231 *** 2025-09-17 00:39:41.633431 | orchestrator | ok: [testbed-node-3] => { 2025-09-17 00:39:41.633442 | orchestrator |  "lvm_report": { 2025-09-17 00:39:41.633453 | orchestrator |  "lv": [ 2025-09-17 00:39:41.633464 | orchestrator |  { 2025-09-17 00:39:41.633490 | orchestrator |  "lv_name": "osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac", 2025-09-17 00:39:41.633502 | orchestrator |  "vg_name": "ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac" 2025-09-17 
00:39:41.633513 | orchestrator |  }, 2025-09-17 00:39:41.633524 | orchestrator |  { 2025-09-17 00:39:41.633534 | orchestrator |  "lv_name": "osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15", 2025-09-17 00:39:41.633545 | orchestrator |  "vg_name": "ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15" 2025-09-17 00:39:41.633556 | orchestrator |  } 2025-09-17 00:39:41.633566 | orchestrator |  ], 2025-09-17 00:39:41.633577 | orchestrator |  "pv": [ 2025-09-17 00:39:41.633587 | orchestrator |  { 2025-09-17 00:39:41.633598 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-17 00:39:41.633609 | orchestrator |  "vg_name": "ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac" 2025-09-17 00:39:41.633619 | orchestrator |  }, 2025-09-17 00:39:41.633630 | orchestrator |  { 2025-09-17 00:39:41.633640 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-17 00:39:41.633651 | orchestrator |  "vg_name": "ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15" 2025-09-17 00:39:41.633662 | orchestrator |  } 2025-09-17 00:39:41.633672 | orchestrator |  ] 2025-09-17 00:39:41.633683 | orchestrator |  } 2025-09-17 00:39:41.633694 | orchestrator | } 2025-09-17 00:39:41.633704 | orchestrator | 2025-09-17 00:39:41.633715 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-17 00:39:41.633726 | orchestrator | 2025-09-17 00:39:41.633737 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-17 00:39:41.633747 | orchestrator | Wednesday 17 September 2025 00:39:38 +0000 (0:00:00.290) 0:00:23.521 *** 2025-09-17 00:39:41.633758 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-17 00:39:41.633776 | orchestrator | 2025-09-17 00:39:41.633787 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-17 00:39:41.633797 | orchestrator | Wednesday 17 September 2025 00:39:39 +0000 (0:00:00.266) 0:00:23.788 *** 2025-09-17 00:39:41.633808 | orchestrator | ok: 
[testbed-node-4] 2025-09-17 00:39:41.633819 | orchestrator | 2025-09-17 00:39:41.633829 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:41.633840 | orchestrator | Wednesday 17 September 2025 00:39:39 +0000 (0:00:00.232) 0:00:24.021 *** 2025-09-17 00:39:41.633850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-17 00:39:41.633881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-17 00:39:41.633892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-17 00:39:41.633903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-17 00:39:41.633914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-17 00:39:41.633924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-17 00:39:41.633935 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-17 00:39:41.633951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-17 00:39:41.633962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-17 00:39:41.633973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-17 00:39:41.633983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-17 00:39:41.633994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-17 00:39:41.634004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-17 00:39:41.634079 | orchestrator | 2025-09-17 
00:39:41.634091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:41.634102 | orchestrator | Wednesday 17 September 2025 00:39:39 +0000 (0:00:00.403) 0:00:24.424 *** 2025-09-17 00:39:41.634113 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:41.634123 | orchestrator | 2025-09-17 00:39:41.634134 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:41.634144 | orchestrator | Wednesday 17 September 2025 00:39:40 +0000 (0:00:00.224) 0:00:24.648 *** 2025-09-17 00:39:41.634155 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:41.634166 | orchestrator | 2025-09-17 00:39:41.634176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:41.634187 | orchestrator | Wednesday 17 September 2025 00:39:40 +0000 (0:00:00.212) 0:00:24.860 *** 2025-09-17 00:39:41.634197 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:41.634208 | orchestrator | 2025-09-17 00:39:41.634219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:41.634229 | orchestrator | Wednesday 17 September 2025 00:39:40 +0000 (0:00:00.503) 0:00:25.364 *** 2025-09-17 00:39:41.634240 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:41.634250 | orchestrator | 2025-09-17 00:39:41.634261 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:41.634272 | orchestrator | Wednesday 17 September 2025 00:39:41 +0000 (0:00:00.198) 0:00:25.562 *** 2025-09-17 00:39:41.634282 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:41.634293 | orchestrator | 2025-09-17 00:39:41.634303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:41.634314 | orchestrator | Wednesday 17 September 2025 00:39:41 +0000 (0:00:00.199) 
0:00:25.761 *** 2025-09-17 00:39:41.634324 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:41.634335 | orchestrator | 2025-09-17 00:39:41.634355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:41.634366 | orchestrator | Wednesday 17 September 2025 00:39:41 +0000 (0:00:00.182) 0:00:25.944 *** 2025-09-17 00:39:41.634377 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:41.634388 | orchestrator | 2025-09-17 00:39:41.634406 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:52.538241 | orchestrator | Wednesday 17 September 2025 00:39:41 +0000 (0:00:00.217) 0:00:26.161 *** 2025-09-17 00:39:52.538355 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.538370 | orchestrator | 2025-09-17 00:39:52.538382 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:52.538394 | orchestrator | Wednesday 17 September 2025 00:39:41 +0000 (0:00:00.191) 0:00:26.353 *** 2025-09-17 00:39:52.538405 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e) 2025-09-17 00:39:52.538417 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e) 2025-09-17 00:39:52.538428 | orchestrator | 2025-09-17 00:39:52.538439 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:52.538450 | orchestrator | Wednesday 17 September 2025 00:39:42 +0000 (0:00:00.412) 0:00:26.765 *** 2025-09-17 00:39:52.538460 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_47b64ee5-5944-488f-91ba-80947343c2c4) 2025-09-17 00:39:52.538471 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_47b64ee5-5944-488f-91ba-80947343c2c4) 2025-09-17 00:39:52.538481 | orchestrator | 2025-09-17 00:39:52.538492 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:52.538503 | orchestrator | Wednesday 17 September 2025 00:39:42 +0000 (0:00:00.407) 0:00:27.172 *** 2025-09-17 00:39:52.538513 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_69134018-d148-466a-9d44-263112a1226d) 2025-09-17 00:39:52.538524 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_69134018-d148-466a-9d44-263112a1226d) 2025-09-17 00:39:52.538534 | orchestrator | 2025-09-17 00:39:52.538545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:52.538555 | orchestrator | Wednesday 17 September 2025 00:39:43 +0000 (0:00:00.412) 0:00:27.584 *** 2025-09-17 00:39:52.538565 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_34b516b0-60cf-4ba1-b912-e488bac04690) 2025-09-17 00:39:52.538576 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_34b516b0-60cf-4ba1-b912-e488bac04690) 2025-09-17 00:39:52.538587 | orchestrator | 2025-09-17 00:39:52.538597 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:39:52.538608 | orchestrator | Wednesday 17 September 2025 00:39:43 +0000 (0:00:00.502) 0:00:28.087 *** 2025-09-17 00:39:52.538618 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-17 00:39:52.538629 | orchestrator | 2025-09-17 00:39:52.538640 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.538650 | orchestrator | Wednesday 17 September 2025 00:39:43 +0000 (0:00:00.435) 0:00:28.523 *** 2025-09-17 00:39:52.538661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-17 00:39:52.538672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-17 
00:39:52.538682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-17 00:39:52.538693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-17 00:39:52.538703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-17 00:39:52.538714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-17 00:39:52.538743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-17 00:39:52.538780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-17 00:39:52.538793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-17 00:39:52.538805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-17 00:39:52.538818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-17 00:39:52.538830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-17 00:39:52.538842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-17 00:39:52.538879 | orchestrator | 2025-09-17 00:39:52.538892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.538905 | orchestrator | Wednesday 17 September 2025 00:39:44 +0000 (0:00:00.762) 0:00:29.285 *** 2025-09-17 00:39:52.538917 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.538929 | orchestrator | 2025-09-17 00:39:52.538941 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.538954 | orchestrator | Wednesday 
17 September 2025 00:39:44 +0000 (0:00:00.238) 0:00:29.524 *** 2025-09-17 00:39:52.538966 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.538979 | orchestrator | 2025-09-17 00:39:52.538991 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.539004 | orchestrator | Wednesday 17 September 2025 00:39:45 +0000 (0:00:00.223) 0:00:29.747 *** 2025-09-17 00:39:52.539017 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539029 | orchestrator | 2025-09-17 00:39:52.539041 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.539054 | orchestrator | Wednesday 17 September 2025 00:39:45 +0000 (0:00:00.229) 0:00:29.977 *** 2025-09-17 00:39:52.539066 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539079 | orchestrator | 2025-09-17 00:39:52.539108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.539121 | orchestrator | Wednesday 17 September 2025 00:39:45 +0000 (0:00:00.243) 0:00:30.221 *** 2025-09-17 00:39:52.539133 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539145 | orchestrator | 2025-09-17 00:39:52.539155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.539166 | orchestrator | Wednesday 17 September 2025 00:39:45 +0000 (0:00:00.216) 0:00:30.437 *** 2025-09-17 00:39:52.539177 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539188 | orchestrator | 2025-09-17 00:39:52.539198 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.539209 | orchestrator | Wednesday 17 September 2025 00:39:46 +0000 (0:00:00.212) 0:00:30.649 *** 2025-09-17 00:39:52.539220 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539231 | orchestrator | 2025-09-17 00:39:52.539241 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.539252 | orchestrator | Wednesday 17 September 2025 00:39:46 +0000 (0:00:00.200) 0:00:30.850 *** 2025-09-17 00:39:52.539263 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539273 | orchestrator | 2025-09-17 00:39:52.539284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.539295 | orchestrator | Wednesday 17 September 2025 00:39:46 +0000 (0:00:00.243) 0:00:31.093 *** 2025-09-17 00:39:52.539305 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-17 00:39:52.539316 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-17 00:39:52.539327 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-17 00:39:52.539338 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-17 00:39:52.539348 | orchestrator | 2025-09-17 00:39:52.539360 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.539371 | orchestrator | Wednesday 17 September 2025 00:39:47 +0000 (0:00:00.956) 0:00:32.050 *** 2025-09-17 00:39:52.539391 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539402 | orchestrator | 2025-09-17 00:39:52.539412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.539423 | orchestrator | Wednesday 17 September 2025 00:39:47 +0000 (0:00:00.205) 0:00:32.256 *** 2025-09-17 00:39:52.539434 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539444 | orchestrator | 2025-09-17 00:39:52.539455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.539466 | orchestrator | Wednesday 17 September 2025 00:39:47 +0000 (0:00:00.199) 0:00:32.456 *** 2025-09-17 00:39:52.539476 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539487 | 
orchestrator | 2025-09-17 00:39:52.539498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:39:52.539508 | orchestrator | Wednesday 17 September 2025 00:39:48 +0000 (0:00:00.719) 0:00:33.176 *** 2025-09-17 00:39:52.539519 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539530 | orchestrator | 2025-09-17 00:39:52.539540 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-17 00:39:52.539551 | orchestrator | Wednesday 17 September 2025 00:39:48 +0000 (0:00:00.228) 0:00:33.404 *** 2025-09-17 00:39:52.539567 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539578 | orchestrator | 2025-09-17 00:39:52.539589 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-17 00:39:52.539600 | orchestrator | Wednesday 17 September 2025 00:39:49 +0000 (0:00:00.141) 0:00:33.546 *** 2025-09-17 00:39:52.539611 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'}}) 2025-09-17 00:39:52.539622 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd1158166-3610-5fc1-bd8e-5288705939fa'}}) 2025-09-17 00:39:52.539632 | orchestrator | 2025-09-17 00:39:52.539643 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-17 00:39:52.539654 | orchestrator | Wednesday 17 September 2025 00:39:49 +0000 (0:00:00.200) 0:00:33.747 *** 2025-09-17 00:39:52.539666 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'}) 2025-09-17 00:39:52.539678 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'}) 2025-09-17 00:39:52.539689 | 
orchestrator | 2025-09-17 00:39:52.539700 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-17 00:39:52.539711 | orchestrator | Wednesday 17 September 2025 00:39:51 +0000 (0:00:01.818) 0:00:35.566 *** 2025-09-17 00:39:52.539721 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:39:52.539733 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:39:52.539744 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:52.539755 | orchestrator | 2025-09-17 00:39:52.539766 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-17 00:39:52.539777 | orchestrator | Wednesday 17 September 2025 00:39:51 +0000 (0:00:00.172) 0:00:35.738 *** 2025-09-17 00:39:52.539787 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'}) 2025-09-17 00:39:52.539798 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'}) 2025-09-17 00:39:52.539809 | orchestrator | 2025-09-17 00:39:52.539826 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-17 00:39:58.744448 | orchestrator | Wednesday 17 September 2025 00:39:52 +0000 (0:00:01.325) 0:00:37.063 *** 2025-09-17 00:39:58.744586 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:39:58.744604 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:39:58.744616 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.744627 | orchestrator | 2025-09-17 00:39:58.744639 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-17 00:39:58.744650 | orchestrator | Wednesday 17 September 2025 00:39:52 +0000 (0:00:00.169) 0:00:37.233 *** 2025-09-17 00:39:58.744660 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.744671 | orchestrator | 2025-09-17 00:39:58.744681 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-17 00:39:58.744692 | orchestrator | Wednesday 17 September 2025 00:39:52 +0000 (0:00:00.159) 0:00:37.392 *** 2025-09-17 00:39:58.744703 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:39:58.744713 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:39:58.744724 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.744734 | orchestrator | 2025-09-17 00:39:58.744745 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-17 00:39:58.744755 | orchestrator | Wednesday 17 September 2025 00:39:53 +0000 (0:00:00.156) 0:00:37.548 *** 2025-09-17 00:39:58.744766 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.744776 | orchestrator | 2025-09-17 00:39:58.744786 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-17 00:39:58.744797 | orchestrator | Wednesday 17 September 2025 00:39:53 +0000 (0:00:00.132) 0:00:37.681 *** 2025-09-17 00:39:58.744807 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:39:58.744818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:39:58.744828 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.744839 | orchestrator | 2025-09-17 00:39:58.744850 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-17 00:39:58.744896 | orchestrator | Wednesday 17 September 2025 00:39:53 +0000 (0:00:00.236) 0:00:37.917 *** 2025-09-17 00:39:58.744921 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.744932 | orchestrator | 2025-09-17 00:39:58.744943 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-17 00:39:58.744953 | orchestrator | Wednesday 17 September 2025 00:39:53 +0000 (0:00:00.363) 0:00:38.281 *** 2025-09-17 00:39:58.744964 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:39:58.744975 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:39:58.744988 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.745000 | orchestrator | 2025-09-17 00:39:58.745012 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-17 00:39:58.745025 | orchestrator | Wednesday 17 September 2025 00:39:53 +0000 (0:00:00.222) 0:00:38.503 *** 2025-09-17 00:39:58.745037 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:39:58.745049 | orchestrator | 2025-09-17 00:39:58.745062 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-09-17 00:39:58.745074 | orchestrator | Wednesday 17 September 2025 00:39:54 +0000 (0:00:00.191) 0:00:38.694 *** 2025-09-17 00:39:58.745094 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:39:58.745108 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:39:58.745120 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.745133 | orchestrator | 2025-09-17 00:39:58.745145 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-17 00:39:58.745157 | orchestrator | Wednesday 17 September 2025 00:39:54 +0000 (0:00:00.183) 0:00:38.878 *** 2025-09-17 00:39:58.745169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:39:58.745182 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:39:58.745194 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.745206 | orchestrator | 2025-09-17 00:39:58.745218 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-17 00:39:58.745231 | orchestrator | Wednesday 17 September 2025 00:39:54 +0000 (0:00:00.197) 0:00:39.076 *** 2025-09-17 00:39:58.745259 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:39:58.745273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 
'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:39:58.745285 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.745297 | orchestrator | 2025-09-17 00:39:58.745310 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-17 00:39:58.745322 | orchestrator | Wednesday 17 September 2025 00:39:54 +0000 (0:00:00.187) 0:00:39.264 *** 2025-09-17 00:39:58.745334 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.745345 | orchestrator | 2025-09-17 00:39:58.745356 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-17 00:39:58.745367 | orchestrator | Wednesday 17 September 2025 00:39:54 +0000 (0:00:00.173) 0:00:39.437 *** 2025-09-17 00:39:58.745377 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.745388 | orchestrator | 2025-09-17 00:39:58.745399 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-17 00:39:58.745410 | orchestrator | Wednesday 17 September 2025 00:39:55 +0000 (0:00:00.191) 0:00:39.628 *** 2025-09-17 00:39:58.745420 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.745431 | orchestrator | 2025-09-17 00:39:58.745442 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-17 00:39:58.745453 | orchestrator | Wednesday 17 September 2025 00:39:55 +0000 (0:00:00.186) 0:00:39.815 *** 2025-09-17 00:39:58.745463 | orchestrator | ok: [testbed-node-4] => { 2025-09-17 00:39:58.745474 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-17 00:39:58.745485 | orchestrator | } 2025-09-17 00:39:58.745496 | orchestrator | 2025-09-17 00:39:58.745507 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-17 00:39:58.745518 | orchestrator | Wednesday 17 September 2025 00:39:55 +0000 (0:00:00.158) 0:00:39.974 *** 2025-09-17 00:39:58.745529 | 
orchestrator | ok: [testbed-node-4] => { 2025-09-17 00:39:58.745539 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-17 00:39:58.745550 | orchestrator | } 2025-09-17 00:39:58.745561 | orchestrator | 2025-09-17 00:39:58.745572 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-17 00:39:58.745582 | orchestrator | Wednesday 17 September 2025 00:39:55 +0000 (0:00:00.176) 0:00:40.150 *** 2025-09-17 00:39:58.745593 | orchestrator | ok: [testbed-node-4] => { 2025-09-17 00:39:58.745604 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-17 00:39:58.745622 | orchestrator | } 2025-09-17 00:39:58.745633 | orchestrator | 2025-09-17 00:39:58.745643 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-17 00:39:58.745654 | orchestrator | Wednesday 17 September 2025 00:39:55 +0000 (0:00:00.163) 0:00:40.314 *** 2025-09-17 00:39:58.745664 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:39:58.745675 | orchestrator | 2025-09-17 00:39:58.745686 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-17 00:39:58.745697 | orchestrator | Wednesday 17 September 2025 00:39:56 +0000 (0:00:00.766) 0:00:41.080 *** 2025-09-17 00:39:58.745708 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:39:58.745718 | orchestrator | 2025-09-17 00:39:58.745729 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-17 00:39:58.745740 | orchestrator | Wednesday 17 September 2025 00:39:57 +0000 (0:00:00.543) 0:00:41.624 *** 2025-09-17 00:39:58.745750 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:39:58.745761 | orchestrator | 2025-09-17 00:39:58.745772 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-17 00:39:58.745783 | orchestrator | Wednesday 17 September 2025 00:39:57 +0000 (0:00:00.604) 0:00:42.228 *** 2025-09-17 
00:39:58.745794 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:39:58.745804 | orchestrator | 2025-09-17 00:39:58.745815 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-17 00:39:58.745826 | orchestrator | Wednesday 17 September 2025 00:39:57 +0000 (0:00:00.144) 0:00:42.373 *** 2025-09-17 00:39:58.745836 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.745847 | orchestrator | 2025-09-17 00:39:58.745878 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-17 00:39:58.745889 | orchestrator | Wednesday 17 September 2025 00:39:57 +0000 (0:00:00.120) 0:00:42.493 *** 2025-09-17 00:39:58.745907 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.745918 | orchestrator | 2025-09-17 00:39:58.745928 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-17 00:39:58.745939 | orchestrator | Wednesday 17 September 2025 00:39:58 +0000 (0:00:00.107) 0:00:42.601 *** 2025-09-17 00:39:58.745950 | orchestrator | ok: [testbed-node-4] => { 2025-09-17 00:39:58.745960 | orchestrator |  "vgs_report": { 2025-09-17 00:39:58.745971 | orchestrator |  "vg": [] 2025-09-17 00:39:58.745982 | orchestrator |  } 2025-09-17 00:39:58.745992 | orchestrator | } 2025-09-17 00:39:58.746003 | orchestrator | 2025-09-17 00:39:58.746065 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-17 00:39:58.746078 | orchestrator | Wednesday 17 September 2025 00:39:58 +0000 (0:00:00.146) 0:00:42.747 *** 2025-09-17 00:39:58.746088 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.746099 | orchestrator | 2025-09-17 00:39:58.746109 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-17 00:39:58.746120 | orchestrator | Wednesday 17 September 2025 00:39:58 +0000 (0:00:00.131) 0:00:42.878 *** 2025-09-17 
00:39:58.746140 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.746150 | orchestrator | 2025-09-17 00:39:58.746161 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-17 00:39:58.746172 | orchestrator | Wednesday 17 September 2025 00:39:58 +0000 (0:00:00.129) 0:00:43.008 *** 2025-09-17 00:39:58.746183 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.746193 | orchestrator | 2025-09-17 00:39:58.746204 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-17 00:39:58.746215 | orchestrator | Wednesday 17 September 2025 00:39:58 +0000 (0:00:00.129) 0:00:43.137 *** 2025-09-17 00:39:58.746225 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:39:58.746236 | orchestrator | 2025-09-17 00:39:58.746247 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-17 00:39:58.746265 | orchestrator | Wednesday 17 September 2025 00:39:58 +0000 (0:00:00.133) 0:00:43.271 *** 2025-09-17 00:40:03.131311 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.131428 | orchestrator | 2025-09-17 00:40:03.131467 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-17 00:40:03.131481 | orchestrator | Wednesday 17 September 2025 00:39:58 +0000 (0:00:00.140) 0:00:43.412 *** 2025-09-17 00:40:03.131492 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.131503 | orchestrator | 2025-09-17 00:40:03.131514 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-17 00:40:03.131525 | orchestrator | Wednesday 17 September 2025 00:39:59 +0000 (0:00:00.323) 0:00:43.736 *** 2025-09-17 00:40:03.131535 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.131546 | orchestrator | 2025-09-17 00:40:03.131557 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-09-17 00:40:03.131567 | orchestrator | Wednesday 17 September 2025 00:39:59 +0000 (0:00:00.140) 0:00:43.877 *** 2025-09-17 00:40:03.131578 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.131589 | orchestrator | 2025-09-17 00:40:03.131599 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-17 00:40:03.131610 | orchestrator | Wednesday 17 September 2025 00:39:59 +0000 (0:00:00.108) 0:00:43.986 *** 2025-09-17 00:40:03.131621 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.131631 | orchestrator | 2025-09-17 00:40:03.131642 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-17 00:40:03.131653 | orchestrator | Wednesday 17 September 2025 00:39:59 +0000 (0:00:00.121) 0:00:44.107 *** 2025-09-17 00:40:03.131663 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.131673 | orchestrator | 2025-09-17 00:40:03.131684 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-17 00:40:03.131695 | orchestrator | Wednesday 17 September 2025 00:39:59 +0000 (0:00:00.116) 0:00:44.224 *** 2025-09-17 00:40:03.131705 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.131715 | orchestrator | 2025-09-17 00:40:03.131726 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-17 00:40:03.131737 | orchestrator | Wednesday 17 September 2025 00:39:59 +0000 (0:00:00.132) 0:00:44.357 *** 2025-09-17 00:40:03.131747 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.131758 | orchestrator | 2025-09-17 00:40:03.131768 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-17 00:40:03.131779 | orchestrator | Wednesday 17 September 2025 00:39:59 +0000 (0:00:00.118) 0:00:44.475 *** 2025-09-17 00:40:03.131789 | orchestrator | skipping: [testbed-node-4] 
2025-09-17 00:40:03.131800 | orchestrator | 2025-09-17 00:40:03.131810 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-17 00:40:03.131821 | orchestrator | Wednesday 17 September 2025 00:40:00 +0000 (0:00:00.124) 0:00:44.600 *** 2025-09-17 00:40:03.131831 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.131843 | orchestrator | 2025-09-17 00:40:03.131897 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-17 00:40:03.131911 | orchestrator | Wednesday 17 September 2025 00:40:00 +0000 (0:00:00.122) 0:00:44.722 *** 2025-09-17 00:40:03.131940 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:40:03.131955 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:40:03.131967 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.131981 | orchestrator | 2025-09-17 00:40:03.131993 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-17 00:40:03.132006 | orchestrator | Wednesday 17 September 2025 00:40:00 +0000 (0:00:00.145) 0:00:44.867 *** 2025-09-17 00:40:03.132018 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:40:03.132032 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:40:03.132052 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.132065 | orchestrator | 2025-09-17 00:40:03.132077 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-09-17 00:40:03.132090 | orchestrator | Wednesday 17 September 2025 00:40:00 +0000 (0:00:00.151) 0:00:45.018 *** 2025-09-17 00:40:03.132102 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:40:03.132114 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:40:03.132127 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.132139 | orchestrator | 2025-09-17 00:40:03.132152 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-17 00:40:03.132165 | orchestrator | Wednesday 17 September 2025 00:40:00 +0000 (0:00:00.153) 0:00:45.172 *** 2025-09-17 00:40:03.132178 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:40:03.132191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:40:03.132203 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.132214 | orchestrator | 2025-09-17 00:40:03.132225 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-17 00:40:03.132251 | orchestrator | Wednesday 17 September 2025 00:40:00 +0000 (0:00:00.270) 0:00:45.443 *** 2025-09-17 00:40:03.132263 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:40:03.132274 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 
'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:40:03.132285 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.132296 | orchestrator | 2025-09-17 00:40:03.132306 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-17 00:40:03.132317 | orchestrator | Wednesday 17 September 2025 00:40:01 +0000 (0:00:00.144) 0:00:45.587 *** 2025-09-17 00:40:03.132328 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:40:03.132338 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:40:03.132349 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.132361 | orchestrator | 2025-09-17 00:40:03.132371 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-17 00:40:03.132382 | orchestrator | Wednesday 17 September 2025 00:40:01 +0000 (0:00:00.138) 0:00:45.725 *** 2025-09-17 00:40:03.132393 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:40:03.132404 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:40:03.132414 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.132425 | orchestrator | 2025-09-17 00:40:03.132435 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-17 00:40:03.132446 | orchestrator | Wednesday 17 September 2025 00:40:01 +0000 (0:00:00.142) 0:00:45.868 *** 2025-09-17 00:40:03.132457 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:40:03.132474 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:40:03.132485 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.132496 | orchestrator | 2025-09-17 00:40:03.132511 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-17 00:40:03.132522 | orchestrator | Wednesday 17 September 2025 00:40:01 +0000 (0:00:00.156) 0:00:46.024 *** 2025-09-17 00:40:03.132533 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:40:03.132544 | orchestrator | 2025-09-17 00:40:03.132554 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-17 00:40:03.132565 | orchestrator | Wednesday 17 September 2025 00:40:01 +0000 (0:00:00.512) 0:00:46.536 *** 2025-09-17 00:40:03.132575 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:40:03.132586 | orchestrator | 2025-09-17 00:40:03.132597 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-17 00:40:03.132607 | orchestrator | Wednesday 17 September 2025 00:40:02 +0000 (0:00:00.554) 0:00:47.091 *** 2025-09-17 00:40:03.132618 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:40:03.132628 | orchestrator | 2025-09-17 00:40:03.132639 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-17 00:40:03.132649 | orchestrator | Wednesday 17 September 2025 00:40:02 +0000 (0:00:00.136) 0:00:47.227 *** 2025-09-17 00:40:03.132660 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'vg_name': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'}) 2025-09-17 00:40:03.132672 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'vg_name': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'}) 2025-09-17 00:40:03.132683 | orchestrator | 2025-09-17 00:40:03.132694 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-17 00:40:03.132704 | orchestrator | Wednesday 17 September 2025 00:40:02 +0000 (0:00:00.148) 0:00:47.376 *** 2025-09-17 00:40:03.132715 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:40:03.132726 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:40:03.132736 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:03.132747 | orchestrator | 2025-09-17 00:40:03.132758 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-17 00:40:03.132768 | orchestrator | Wednesday 17 September 2025 00:40:02 +0000 (0:00:00.143) 0:00:47.519 *** 2025-09-17 00:40:03.132779 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:40:03.132790 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:40:03.132807 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:08.993936 | orchestrator | 2025-09-17 00:40:08.994065 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-17 00:40:08.994083 | orchestrator | Wednesday 17 September 2025 00:40:03 +0000 (0:00:00.139) 0:00:47.659 *** 2025-09-17 00:40:08.994095 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'})  2025-09-17 00:40:08.994108 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'})  2025-09-17 00:40:08.994119 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:40:08.994130 | orchestrator | 2025-09-17 00:40:08.994141 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-17 00:40:08.994152 | orchestrator | Wednesday 17 September 2025 00:40:03 +0000 (0:00:00.143) 0:00:47.803 *** 2025-09-17 00:40:08.994185 | orchestrator | ok: [testbed-node-4] => { 2025-09-17 00:40:08.994197 | orchestrator |  "lvm_report": { 2025-09-17 00:40:08.994209 | orchestrator |  "lv": [ 2025-09-17 00:40:08.994220 | orchestrator |  { 2025-09-17 00:40:08.994231 | orchestrator |  "lv_name": "osd-block-d1158166-3610-5fc1-bd8e-5288705939fa", 2025-09-17 00:40:08.994242 | orchestrator |  "vg_name": "ceph-d1158166-3610-5fc1-bd8e-5288705939fa" 2025-09-17 00:40:08.994253 | orchestrator |  }, 2025-09-17 00:40:08.994264 | orchestrator |  { 2025-09-17 00:40:08.994274 | orchestrator |  "lv_name": "osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d", 2025-09-17 00:40:08.994285 | orchestrator |  "vg_name": "ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d" 2025-09-17 00:40:08.994296 | orchestrator |  } 2025-09-17 00:40:08.994306 | orchestrator |  ], 2025-09-17 00:40:08.994317 | orchestrator |  "pv": [ 2025-09-17 00:40:08.994328 | orchestrator |  { 2025-09-17 00:40:08.994339 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-17 00:40:08.994350 | orchestrator |  "vg_name": "ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d" 2025-09-17 00:40:08.994360 | orchestrator |  }, 2025-09-17 00:40:08.994371 | orchestrator |  { 2025-09-17 00:40:08.994382 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-17 00:40:08.994393 | orchestrator |  "vg_name": 
"ceph-d1158166-3610-5fc1-bd8e-5288705939fa" 2025-09-17 00:40:08.994403 | orchestrator |  } 2025-09-17 00:40:08.994414 | orchestrator |  ] 2025-09-17 00:40:08.994425 | orchestrator |  } 2025-09-17 00:40:08.994435 | orchestrator | } 2025-09-17 00:40:08.994447 | orchestrator | 2025-09-17 00:40:08.994458 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-17 00:40:08.994471 | orchestrator | 2025-09-17 00:40:08.994482 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-17 00:40:08.994496 | orchestrator | Wednesday 17 September 2025 00:40:03 +0000 (0:00:00.371) 0:00:48.175 *** 2025-09-17 00:40:08.994509 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-17 00:40:08.994521 | orchestrator | 2025-09-17 00:40:08.994535 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-17 00:40:08.994554 | orchestrator | Wednesday 17 September 2025 00:40:03 +0000 (0:00:00.225) 0:00:48.401 *** 2025-09-17 00:40:08.994573 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:40:08.994591 | orchestrator | 2025-09-17 00:40:08.994611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.994632 | orchestrator | Wednesday 17 September 2025 00:40:04 +0000 (0:00:00.200) 0:00:48.601 *** 2025-09-17 00:40:08.994652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-17 00:40:08.994665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-17 00:40:08.994678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-17 00:40:08.994691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-17 00:40:08.994704 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-17 00:40:08.994715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-17 00:40:08.994726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-17 00:40:08.994736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-17 00:40:08.994747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-17 00:40:08.994758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-17 00:40:08.994768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-17 00:40:08.994787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-17 00:40:08.994798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-17 00:40:08.994808 | orchestrator | 2025-09-17 00:40:08.994819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.994830 | orchestrator | Wednesday 17 September 2025 00:40:04 +0000 (0:00:00.373) 0:00:48.974 *** 2025-09-17 00:40:08.994841 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:08.994855 | orchestrator | 2025-09-17 00:40:08.994887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.994899 | orchestrator | Wednesday 17 September 2025 00:40:04 +0000 (0:00:00.188) 0:00:49.163 *** 2025-09-17 00:40:08.994910 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:08.994920 | orchestrator | 2025-09-17 00:40:08.994931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.994958 | orchestrator | 
Wednesday 17 September 2025 00:40:04 +0000 (0:00:00.176) 0:00:49.339 *** 2025-09-17 00:40:08.994970 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:08.994981 | orchestrator | 2025-09-17 00:40:08.994992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.995002 | orchestrator | Wednesday 17 September 2025 00:40:04 +0000 (0:00:00.182) 0:00:49.522 *** 2025-09-17 00:40:08.995013 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:08.995024 | orchestrator | 2025-09-17 00:40:08.995035 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.995046 | orchestrator | Wednesday 17 September 2025 00:40:05 +0000 (0:00:00.175) 0:00:49.697 *** 2025-09-17 00:40:08.995056 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:08.995067 | orchestrator | 2025-09-17 00:40:08.995117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.995130 | orchestrator | Wednesday 17 September 2025 00:40:05 +0000 (0:00:00.217) 0:00:49.915 *** 2025-09-17 00:40:08.995140 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:08.995151 | orchestrator | 2025-09-17 00:40:08.995162 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.995173 | orchestrator | Wednesday 17 September 2025 00:40:06 +0000 (0:00:00.625) 0:00:50.540 *** 2025-09-17 00:40:08.995183 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:08.995194 | orchestrator | 2025-09-17 00:40:08.995205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.995216 | orchestrator | Wednesday 17 September 2025 00:40:06 +0000 (0:00:00.228) 0:00:50.769 *** 2025-09-17 00:40:08.995226 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:08.995237 | orchestrator | 2025-09-17 00:40:08.995248 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.995259 | orchestrator | Wednesday 17 September 2025 00:40:06 +0000 (0:00:00.204) 0:00:50.973 *** 2025-09-17 00:40:08.995269 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571) 2025-09-17 00:40:08.995281 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571) 2025-09-17 00:40:08.995292 | orchestrator | 2025-09-17 00:40:08.995303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.995314 | orchestrator | Wednesday 17 September 2025 00:40:06 +0000 (0:00:00.438) 0:00:51.412 *** 2025-09-17 00:40:08.995325 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_833e18f8-a2f7-4c8c-b617-8f83ac55bde9) 2025-09-17 00:40:08.995335 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_833e18f8-a2f7-4c8c-b617-8f83ac55bde9) 2025-09-17 00:40:08.995346 | orchestrator | 2025-09-17 00:40:08.995357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.995368 | orchestrator | Wednesday 17 September 2025 00:40:07 +0000 (0:00:00.422) 0:00:51.834 *** 2025-09-17 00:40:08.995390 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d2e8bc3-4c44-4e8e-a645-39611fbfc66e) 2025-09-17 00:40:08.995402 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d2e8bc3-4c44-4e8e-a645-39611fbfc66e) 2025-09-17 00:40:08.995412 | orchestrator | 2025-09-17 00:40:08.995423 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.995434 | orchestrator | Wednesday 17 September 2025 00:40:07 +0000 (0:00:00.462) 0:00:52.297 *** 2025-09-17 00:40:08.995445 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_922621dd-972b-4e9a-bc9e-e1e44ba503f7) 2025-09-17 00:40:08.995455 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_922621dd-972b-4e9a-bc9e-e1e44ba503f7) 2025-09-17 00:40:08.995466 | orchestrator | 2025-09-17 00:40:08.995477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 00:40:08.995488 | orchestrator | Wednesday 17 September 2025 00:40:08 +0000 (0:00:00.426) 0:00:52.723 *** 2025-09-17 00:40:08.995499 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-17 00:40:08.995509 | orchestrator | 2025-09-17 00:40:08.995520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:08.995531 | orchestrator | Wednesday 17 September 2025 00:40:08 +0000 (0:00:00.371) 0:00:53.095 *** 2025-09-17 00:40:08.995541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-17 00:40:08.995552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-17 00:40:08.995563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-17 00:40:08.995574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-17 00:40:08.995584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-17 00:40:08.995595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-17 00:40:08.995606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-17 00:40:08.995616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-17 00:40:08.995627 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-17 00:40:08.995638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-17 00:40:08.995649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-17 00:40:08.995666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-17 00:40:17.708547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-17 00:40:17.708657 | orchestrator | 2025-09-17 00:40:17.708672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.708685 | orchestrator | Wednesday 17 September 2025 00:40:08 +0000 (0:00:00.417) 0:00:53.512 *** 2025-09-17 00:40:17.708696 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.708708 | orchestrator | 2025-09-17 00:40:17.708719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.708729 | orchestrator | Wednesday 17 September 2025 00:40:09 +0000 (0:00:00.197) 0:00:53.710 *** 2025-09-17 00:40:17.708740 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.708751 | orchestrator | 2025-09-17 00:40:17.708761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.708772 | orchestrator | Wednesday 17 September 2025 00:40:09 +0000 (0:00:00.195) 0:00:53.906 *** 2025-09-17 00:40:17.708782 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.708793 | orchestrator | 2025-09-17 00:40:17.708803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.708838 | orchestrator | Wednesday 17 September 2025 00:40:09 +0000 (0:00:00.607) 0:00:54.513 *** 2025-09-17 00:40:17.708850 | orchestrator | 
skipping: [testbed-node-5] 2025-09-17 00:40:17.708898 | orchestrator | 2025-09-17 00:40:17.708910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.708921 | orchestrator | Wednesday 17 September 2025 00:40:10 +0000 (0:00:00.219) 0:00:54.733 *** 2025-09-17 00:40:17.708931 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.708941 | orchestrator | 2025-09-17 00:40:17.708952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.708962 | orchestrator | Wednesday 17 September 2025 00:40:10 +0000 (0:00:00.193) 0:00:54.926 *** 2025-09-17 00:40:17.708973 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.708983 | orchestrator | 2025-09-17 00:40:17.708993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.709004 | orchestrator | Wednesday 17 September 2025 00:40:10 +0000 (0:00:00.196) 0:00:55.123 *** 2025-09-17 00:40:17.709014 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709025 | orchestrator | 2025-09-17 00:40:17.709035 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.709046 | orchestrator | Wednesday 17 September 2025 00:40:10 +0000 (0:00:00.182) 0:00:55.306 *** 2025-09-17 00:40:17.709056 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709067 | orchestrator | 2025-09-17 00:40:17.709078 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.709091 | orchestrator | Wednesday 17 September 2025 00:40:10 +0000 (0:00:00.178) 0:00:55.484 *** 2025-09-17 00:40:17.709103 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-17 00:40:17.709116 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-17 00:40:17.709144 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-17 
00:40:17.709157 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-17 00:40:17.709169 | orchestrator | 2025-09-17 00:40:17.709181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.709192 | orchestrator | Wednesday 17 September 2025 00:40:11 +0000 (0:00:00.603) 0:00:56.088 *** 2025-09-17 00:40:17.709205 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709217 | orchestrator | 2025-09-17 00:40:17.709229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.709240 | orchestrator | Wednesday 17 September 2025 00:40:11 +0000 (0:00:00.214) 0:00:56.303 *** 2025-09-17 00:40:17.709253 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709265 | orchestrator | 2025-09-17 00:40:17.709278 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.709290 | orchestrator | Wednesday 17 September 2025 00:40:11 +0000 (0:00:00.178) 0:00:56.481 *** 2025-09-17 00:40:17.709301 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709314 | orchestrator | 2025-09-17 00:40:17.709325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 00:40:17.709338 | orchestrator | Wednesday 17 September 2025 00:40:12 +0000 (0:00:00.186) 0:00:56.668 *** 2025-09-17 00:40:17.709349 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709361 | orchestrator | 2025-09-17 00:40:17.709373 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-17 00:40:17.709385 | orchestrator | Wednesday 17 September 2025 00:40:12 +0000 (0:00:00.185) 0:00:56.853 *** 2025-09-17 00:40:17.709397 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709409 | orchestrator | 2025-09-17 00:40:17.709421 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-17 00:40:17.709433 | orchestrator | Wednesday 17 September 2025 00:40:12 +0000 (0:00:00.255) 0:00:57.109 *** 2025-09-17 00:40:17.709444 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2dc6576b-ad92-58b3-afc8-22b8ce20905e'}}) 2025-09-17 00:40:17.709455 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a7b5a8de-6218-5c80-971a-bac3422a4161'}}) 2025-09-17 00:40:17.709473 | orchestrator | 2025-09-17 00:40:17.709484 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-17 00:40:17.709494 | orchestrator | Wednesday 17 September 2025 00:40:12 +0000 (0:00:00.177) 0:00:57.287 *** 2025-09-17 00:40:17.709506 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'}) 2025-09-17 00:40:17.709519 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'}) 2025-09-17 00:40:17.709529 | orchestrator | 2025-09-17 00:40:17.709540 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-17 00:40:17.709567 | orchestrator | Wednesday 17 September 2025 00:40:14 +0000 (0:00:01.816) 0:00:59.103 *** 2025-09-17 00:40:17.709579 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})  2025-09-17 00:40:17.709591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})  2025-09-17 00:40:17.709601 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709612 | orchestrator | 2025-09-17 00:40:17.709623 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-17 00:40:17.709633 | orchestrator | Wednesday 17 September 2025 00:40:14 +0000 (0:00:00.144) 0:00:59.248 *** 2025-09-17 00:40:17.709644 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'}) 2025-09-17 00:40:17.709655 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'}) 2025-09-17 00:40:17.709666 | orchestrator | 2025-09-17 00:40:17.709677 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-17 00:40:17.709687 | orchestrator | Wednesday 17 September 2025 00:40:16 +0000 (0:00:01.394) 0:01:00.642 *** 2025-09-17 00:40:17.709698 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})  2025-09-17 00:40:17.709709 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})  2025-09-17 00:40:17.709720 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709730 | orchestrator | 2025-09-17 00:40:17.709741 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-17 00:40:17.709751 | orchestrator | Wednesday 17 September 2025 00:40:16 +0000 (0:00:00.159) 0:01:00.801 *** 2025-09-17 00:40:17.709762 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709772 | orchestrator | 2025-09-17 00:40:17.709783 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-17 00:40:17.709793 | orchestrator | Wednesday 17 September 2025 00:40:16 +0000 (0:00:00.145) 0:01:00.946 *** 2025-09-17 00:40:17.709804 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})  2025-09-17 00:40:17.709820 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})  2025-09-17 00:40:17.709831 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709841 | orchestrator | 2025-09-17 00:40:17.709852 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-17 00:40:17.709880 | orchestrator | Wednesday 17 September 2025 00:40:16 +0000 (0:00:00.161) 0:01:01.108 *** 2025-09-17 00:40:17.709891 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709909 | orchestrator | 2025-09-17 00:40:17.709919 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-17 00:40:17.709930 | orchestrator | Wednesday 17 September 2025 00:40:16 +0000 (0:00:00.139) 0:01:01.248 *** 2025-09-17 00:40:17.709941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})  2025-09-17 00:40:17.709952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})  2025-09-17 00:40:17.709962 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.709972 | orchestrator | 2025-09-17 00:40:17.709983 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-17 00:40:17.709994 | orchestrator | Wednesday 17 September 2025 00:40:16 +0000 (0:00:00.173) 0:01:01.421 *** 2025-09-17 00:40:17.710004 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.710068 | orchestrator | 2025-09-17 00:40:17.710081 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-17 00:40:17.710092 | orchestrator | Wednesday 17 September 2025 00:40:17 +0000 (0:00:00.144) 0:01:01.565 *** 2025-09-17 00:40:17.710103 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})  2025-09-17 00:40:17.710113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})  2025-09-17 00:40:17.710124 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:17.710135 | orchestrator | 2025-09-17 00:40:17.710145 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-17 00:40:17.710156 | orchestrator | Wednesday 17 September 2025 00:40:17 +0000 (0:00:00.165) 0:01:01.730 *** 2025-09-17 00:40:17.710167 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:40:17.710177 | orchestrator | 2025-09-17 00:40:17.710188 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-17 00:40:17.710198 | orchestrator | Wednesday 17 September 2025 00:40:17 +0000 (0:00:00.343) 0:01:02.074 *** 2025-09-17 00:40:17.710216 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})  2025-09-17 00:40:24.085546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})  2025-09-17 00:40:24.085684 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:24.085704 | orchestrator | 2025-09-17 00:40:24.085717 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-17 00:40:24.085730 | orchestrator | Wednesday 17 September 
2025 00:40:17 +0000 (0:00:00.165) 0:01:02.239 *** 2025-09-17 00:40:24.085741 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})  2025-09-17 00:40:24.085752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})  2025-09-17 00:40:24.085763 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:24.085774 | orchestrator | 2025-09-17 00:40:24.085785 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-17 00:40:24.085796 | orchestrator | Wednesday 17 September 2025 00:40:17 +0000 (0:00:00.153) 0:01:02.393 *** 2025-09-17 00:40:24.085807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})  2025-09-17 00:40:24.085817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})  2025-09-17 00:40:24.085828 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:24.085917 | orchestrator | 2025-09-17 00:40:24.085932 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-17 00:40:24.085942 | orchestrator | Wednesday 17 September 2025 00:40:18 +0000 (0:00:00.161) 0:01:02.555 *** 2025-09-17 00:40:24.085953 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:24.085963 | orchestrator | 2025-09-17 00:40:24.085974 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-17 00:40:24.085984 | orchestrator | Wednesday 17 September 2025 00:40:18 +0000 (0:00:00.145) 0:01:02.701 *** 2025-09-17 00:40:24.085995 | orchestrator | skipping: [testbed-node-5] 2025-09-17 
00:40:24.086005 | orchestrator | 2025-09-17 00:40:24.086068 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-17 00:40:24.086083 | orchestrator | Wednesday 17 September 2025 00:40:18 +0000 (0:00:00.128) 0:01:02.830 *** 2025-09-17 00:40:24.086095 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:24.086108 | orchestrator | 2025-09-17 00:40:24.086119 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-17 00:40:24.086132 | orchestrator | Wednesday 17 September 2025 00:40:18 +0000 (0:00:00.138) 0:01:02.968 *** 2025-09-17 00:40:24.086144 | orchestrator | ok: [testbed-node-5] => { 2025-09-17 00:40:24.086157 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-17 00:40:24.086170 | orchestrator | } 2025-09-17 00:40:24.086182 | orchestrator | 2025-09-17 00:40:24.086195 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-17 00:40:24.086207 | orchestrator | Wednesday 17 September 2025 00:40:18 +0000 (0:00:00.139) 0:01:03.108 *** 2025-09-17 00:40:24.086219 | orchestrator | ok: [testbed-node-5] => { 2025-09-17 00:40:24.086232 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-17 00:40:24.086244 | orchestrator | } 2025-09-17 00:40:24.086257 | orchestrator | 2025-09-17 00:40:24.086269 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-17 00:40:24.086283 | orchestrator | Wednesday 17 September 2025 00:40:18 +0000 (0:00:00.156) 0:01:03.264 *** 2025-09-17 00:40:24.086295 | orchestrator | ok: [testbed-node-5] => { 2025-09-17 00:40:24.086308 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-17 00:40:24.086321 | orchestrator | } 2025-09-17 00:40:24.086333 | orchestrator | 2025-09-17 00:40:24.086345 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-17 00:40:24.086357 | 
orchestrator | Wednesday 17 September 2025 00:40:18 +0000 (0:00:00.154) 0:01:03.419 *** 2025-09-17 00:40:24.086370 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:40:24.086382 | orchestrator | 2025-09-17 00:40:24.086394 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-17 00:40:24.086407 | orchestrator | Wednesday 17 September 2025 00:40:19 +0000 (0:00:00.533) 0:01:03.952 *** 2025-09-17 00:40:24.086419 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:40:24.086430 | orchestrator | 2025-09-17 00:40:24.086441 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-17 00:40:24.086451 | orchestrator | Wednesday 17 September 2025 00:40:19 +0000 (0:00:00.562) 0:01:04.515 *** 2025-09-17 00:40:24.086462 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:40:24.086473 | orchestrator | 2025-09-17 00:40:24.086483 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-17 00:40:24.086494 | orchestrator | Wednesday 17 September 2025 00:40:20 +0000 (0:00:00.738) 0:01:05.253 *** 2025-09-17 00:40:24.086504 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:40:24.086515 | orchestrator | 2025-09-17 00:40:24.086526 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-17 00:40:24.086536 | orchestrator | Wednesday 17 September 2025 00:40:20 +0000 (0:00:00.135) 0:01:05.388 *** 2025-09-17 00:40:24.086547 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:24.086558 | orchestrator | 2025-09-17 00:40:24.086569 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-17 00:40:24.086579 | orchestrator | Wednesday 17 September 2025 00:40:20 +0000 (0:00:00.113) 0:01:05.502 *** 2025-09-17 00:40:24.086599 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:40:24.086610 | orchestrator | 2025-09-17 00:40:24.086621 | 
orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-17 00:40:24.086631 | orchestrator | Wednesday 17 September 2025 00:40:21 +0000 (0:00:00.122) 0:01:05.624 ***
2025-09-17 00:40:24.086642 | orchestrator | ok: [testbed-node-5] => {
2025-09-17 00:40:24.086672 | orchestrator |     "vgs_report": {
2025-09-17 00:40:24.086684 | orchestrator |         "vg": []
2025-09-17 00:40:24.086713 | orchestrator |     }
2025-09-17 00:40:24.086725 | orchestrator | }
2025-09-17 00:40:24.086736 | orchestrator |
2025-09-17 00:40:24.086747 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-17 00:40:24.086758 | orchestrator | Wednesday 17 September 2025 00:40:21 +0000 (0:00:00.145) 0:01:05.770 ***
2025-09-17 00:40:24.086768 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.086779 | orchestrator |
2025-09-17 00:40:24.086790 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-17 00:40:24.086800 | orchestrator | Wednesday 17 September 2025 00:40:21 +0000 (0:00:00.164) 0:01:05.934 ***
2025-09-17 00:40:24.086811 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.086821 | orchestrator |
2025-09-17 00:40:24.086832 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-17 00:40:24.086843 | orchestrator | Wednesday 17 September 2025 00:40:21 +0000 (0:00:00.141) 0:01:06.076 ***
2025-09-17 00:40:24.086853 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.086906 | orchestrator |
2025-09-17 00:40:24.086917 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-17 00:40:24.086928 | orchestrator | Wednesday 17 September 2025 00:40:21 +0000 (0:00:00.153) 0:01:06.230 ***
2025-09-17 00:40:24.086939 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.086949 | orchestrator |
2025-09-17 00:40:24.086960 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-17 00:40:24.086974 | orchestrator | Wednesday 17 September 2025 00:40:21 +0000 (0:00:00.148) 0:01:06.378 ***
2025-09-17 00:40:24.086993 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087010 | orchestrator |
2025-09-17 00:40:24.087027 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-17 00:40:24.087044 | orchestrator | Wednesday 17 September 2025 00:40:21 +0000 (0:00:00.152) 0:01:06.531 ***
2025-09-17 00:40:24.087061 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087078 | orchestrator |
2025-09-17 00:40:24.087095 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-17 00:40:24.087114 | orchestrator | Wednesday 17 September 2025 00:40:22 +0000 (0:00:00.164) 0:01:06.696 ***
2025-09-17 00:40:24.087131 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087148 | orchestrator |
2025-09-17 00:40:24.087166 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-17 00:40:24.087184 | orchestrator | Wednesday 17 September 2025 00:40:22 +0000 (0:00:00.149) 0:01:06.846 ***
2025-09-17 00:40:24.087204 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087222 | orchestrator |
2025-09-17 00:40:24.087240 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-17 00:40:24.087251 | orchestrator | Wednesday 17 September 2025 00:40:22 +0000 (0:00:00.141) 0:01:06.987 ***
2025-09-17 00:40:24.087261 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087272 | orchestrator |
2025-09-17 00:40:24.087283 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-17 00:40:24.087301 | orchestrator | Wednesday 17 September 2025 00:40:22 +0000 (0:00:00.339) 0:01:07.326 ***
2025-09-17 00:40:24.087312 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087322 | orchestrator |
2025-09-17 00:40:24.087332 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-17 00:40:24.087343 | orchestrator | Wednesday 17 September 2025 00:40:22 +0000 (0:00:00.154) 0:01:07.480 ***
2025-09-17 00:40:24.087354 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087374 | orchestrator |
2025-09-17 00:40:24.087384 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-17 00:40:24.087395 | orchestrator | Wednesday 17 September 2025 00:40:23 +0000 (0:00:00.169) 0:01:07.650 ***
2025-09-17 00:40:24.087406 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087417 | orchestrator |
2025-09-17 00:40:24.087427 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-17 00:40:24.087438 | orchestrator | Wednesday 17 September 2025 00:40:23 +0000 (0:00:00.151) 0:01:07.802 ***
2025-09-17 00:40:24.087449 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087459 | orchestrator |
2025-09-17 00:40:24.087470 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-17 00:40:24.087481 | orchestrator | Wednesday 17 September 2025 00:40:23 +0000 (0:00:00.142) 0:01:07.944 ***
2025-09-17 00:40:24.087491 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087502 | orchestrator |
2025-09-17 00:40:24.087513 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-17 00:40:24.087523 | orchestrator | Wednesday 17 September 2025 00:40:23 +0000 (0:00:00.139) 0:01:08.083 ***
2025-09-17 00:40:24.087535 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17
00:40:24.087546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:24.087557 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087568 | orchestrator |
2025-09-17 00:40:24.087578 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-17 00:40:24.087589 | orchestrator | Wednesday 17 September 2025 00:40:23 +0000 (0:00:00.189) 0:01:08.273 ***
2025-09-17 00:40:24.087600 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17 00:40:24.087611 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:24.087621 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:24.087632 | orchestrator |
2025-09-17 00:40:24.087642 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-17 00:40:24.087653 | orchestrator | Wednesday 17 September 2025 00:40:23 +0000 (0:00:00.172) 0:01:08.445 ***
2025-09-17 00:40:24.087673 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17 00:40:27.278709 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:27.278797 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:27.278807 | orchestrator |
2025-09-17 00:40:27.278816 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-17 00:40:27.278825 | orchestrator | Wednesday 17 September 2025 00:40:24 +0000 (0:00:00.171) 0:01:08.617 ***
2025-09-17 00:40:27.278833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17 00:40:27.278841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:27.278848 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:27.278855 | orchestrator |
2025-09-17 00:40:27.278901 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-17 00:40:27.278909 | orchestrator | Wednesday 17 September 2025 00:40:24 +0000 (0:00:00.158) 0:01:08.775 ***
2025-09-17 00:40:27.278916 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17 00:40:27.278944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:27.278951 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:27.278958 | orchestrator |
2025-09-17 00:40:27.278966 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-17 00:40:27.278973 | orchestrator | Wednesday 17 September 2025 00:40:24 +0000 (0:00:00.180) 0:01:08.956 ***
2025-09-17 00:40:27.278980 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17 00:40:27.278987 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:27.278994 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:27.279001 | orchestrator |
2025-09-17 00:40:27.279021 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-17 00:40:27.279028 | orchestrator | Wednesday 17 September 2025 00:40:24 +0000 (0:00:00.147) 0:01:09.103 ***
2025-09-17 00:40:27.279035 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17 00:40:27.279042 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:27.279050 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:27.279057 | orchestrator |
2025-09-17 00:40:27.279064 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-17 00:40:27.279072 | orchestrator | Wednesday 17 September 2025 00:40:25 +0000 (0:00:00.456) 0:01:09.560 ***
2025-09-17 00:40:27.279079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17 00:40:27.279086 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:27.279094 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:27.279101 | orchestrator |
2025-09-17 00:40:27.279108 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-17 00:40:27.279115 | orchestrator | Wednesday 17 September 2025 00:40:25 +0000 (0:00:00.212) 0:01:09.773 ***
2025-09-17 00:40:27.279122 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:40:27.279130 | orchestrator |
2025-09-17 00:40:27.279137 | orchestrator | TASK [Get list of Ceph PVs with
associated VGs] ********************************
2025-09-17 00:40:27.279144 | orchestrator | Wednesday 17 September 2025 00:40:25 +0000 (0:00:00.537) 0:01:10.310 ***
2025-09-17 00:40:27.279151 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:40:27.279158 | orchestrator |
2025-09-17 00:40:27.279165 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-17 00:40:27.279172 | orchestrator | Wednesday 17 September 2025 00:40:26 +0000 (0:00:00.532) 0:01:10.843 ***
2025-09-17 00:40:27.279179 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:40:27.279186 | orchestrator |
2025-09-17 00:40:27.279193 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-17 00:40:27.279200 | orchestrator | Wednesday 17 September 2025 00:40:26 +0000 (0:00:00.148) 0:01:10.991 ***
2025-09-17 00:40:27.279208 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'vg_name': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17 00:40:27.279216 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'vg_name': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:27.279223 | orchestrator |
2025-09-17 00:40:27.279230 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-17 00:40:27.279243 | orchestrator | Wednesday 17 September 2025 00:40:26 +0000 (0:00:00.172) 0:01:11.163 ***
2025-09-17 00:40:27.279263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17 00:40:27.279272 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:27.279281 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:27.279289 | orchestrator |
2025-09-17 00:40:27.279297 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-17 00:40:27.279305 | orchestrator | Wednesday 17 September 2025 00:40:26 +0000 (0:00:00.163) 0:01:11.327 ***
2025-09-17 00:40:27.279313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17 00:40:27.279321 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:27.279330 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:27.279338 | orchestrator |
2025-09-17 00:40:27.279346 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-17 00:40:27.279354 | orchestrator | Wednesday 17 September 2025 00:40:26 +0000 (0:00:00.153) 0:01:11.480 ***
2025-09-17 00:40:27.279363 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'})
2025-09-17 00:40:27.279371 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'})
2025-09-17 00:40:27.279379 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:27.279387 | orchestrator |
2025-09-17 00:40:27.279395 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-17 00:40:27.279403 | orchestrator | Wednesday 17 September 2025 00:40:27 +0000 (0:00:00.155) 0:01:11.635 ***
2025-09-17 00:40:27.279411 | orchestrator | ok: [testbed-node-5] => {
2025-09-17 00:40:27.279419 | orchestrator |     "lvm_report": {
2025-09-17 00:40:27.279427 | orchestrator |         "lv": [
2025-09-17 00:40:27.279436 | orchestrator |             {
2025-09-17 00:40:27.279444 | orchestrator |                 "lv_name": "osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e",
2025-09-17 00:40:27.279456 | orchestrator |                 "vg_name": "ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e"
2025-09-17 00:40:27.279464 | orchestrator |             },
2025-09-17 00:40:27.279472 | orchestrator |             {
2025-09-17 00:40:27.279481 | orchestrator |                 "lv_name": "osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161",
2025-09-17 00:40:27.279489 | orchestrator |                 "vg_name": "ceph-a7b5a8de-6218-5c80-971a-bac3422a4161"
2025-09-17 00:40:27.279497 | orchestrator |             }
2025-09-17 00:40:27.279505 | orchestrator |         ],
2025-09-17 00:40:27.279513 | orchestrator |         "pv": [
2025-09-17 00:40:27.279521 | orchestrator |             {
2025-09-17 00:40:27.279529 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-17 00:40:27.279537 | orchestrator |                 "vg_name": "ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e"
2025-09-17 00:40:27.279545 | orchestrator |             },
2025-09-17 00:40:27.279552 | orchestrator |             {
2025-09-17 00:40:27.279560 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-17 00:40:27.279568 | orchestrator |                 "vg_name": "ceph-a7b5a8de-6218-5c80-971a-bac3422a4161"
2025-09-17 00:40:27.279576 | orchestrator |             }
2025-09-17 00:40:27.279584 | orchestrator |         ]
2025-09-17 00:40:27.279592 | orchestrator |     }
2025-09-17 00:40:27.279599 | orchestrator | }
2025-09-17 00:40:27.279608 | orchestrator |
2025-09-17 00:40:27.279616 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:40:27.279630 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-17 00:40:27.279640 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-17 00:40:27.279651 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-17 00:40:27.279661 | orchestrator |
2025-09-17 00:40:27.279669 |
orchestrator |
2025-09-17 00:40:27.279676 | orchestrator |
2025-09-17 00:40:27.279683 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:40:27.279690 | orchestrator | Wednesday 17 September 2025 00:40:27 +0000 (0:00:00.151) 0:01:11.787 ***
2025-09-17 00:40:27.279697 | orchestrator | ===============================================================================
2025-09-17 00:40:27.279704 | orchestrator | Create block VGs -------------------------------------------------------- 5.60s
2025-09-17 00:40:27.279711 | orchestrator | Create block LVs -------------------------------------------------------- 4.11s
2025-09-17 00:40:27.279718 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.97s
2025-09-17 00:40:27.279725 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.89s
2025-09-17 00:40:27.279732 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.65s
2025-09-17 00:40:27.279739 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.63s
2025-09-17 00:40:27.279746 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s
2025-09-17 00:40:27.279753 | orchestrator | Add known partitions to the list of available block devices ------------- 1.52s
2025-09-17 00:40:27.279764 | orchestrator | Add known links to the list of available block devices ------------------ 1.12s
2025-09-17 00:40:27.629903 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s
2025-09-17 00:40:27.630003 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s
2025-09-17 00:40:27.630085 | orchestrator | Print LVM report data --------------------------------------------------- 0.81s
2025-09-17 00:40:27.630107 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.79s
2025-09-17 00:40:27.630125 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.74s
2025-09-17 00:40:27.630140 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2025-09-17 00:40:27.630151 | orchestrator | Prepare variables for OSD count check ----------------------------------- 0.72s
2025-09-17 00:40:27.630161 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-09-17 00:40:27.630172 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s
2025-09-17 00:40:27.630182 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.64s
2025-09-17 00:40:27.630193 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.64s
2025-09-17 00:40:39.854259 | orchestrator | 2025-09-17 00:40:39 | INFO  | Task aaec91c9-b010-442e-b91d-31cd76cb949c (facts) was prepared for execution.
2025-09-17 00:40:39.854380 | orchestrator | 2025-09-17 00:40:39 | INFO  | It takes a moment until task aaec91c9-b010-442e-b91d-31cd76cb949c (facts) has been started and output is visible here.
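The play above gathers DB, WAL, and DB+WAL volume-group totals as JSON-formatted LVM reports, combines them, and only then runs the size checks. A minimal sketch of that combining step, assuming output in the shape produced by `vgs --units b --reportformat json` (the VG names match the log; the byte values here are hypothetical sample data, not taken from this run):

```python
import json

# Hypothetical sample in the `vgs --reportformat json` shape; on
# testbed-node-5 the play above actually reported an empty "vg" list.
VGS_JSON = """
{
  "report": [
    {
      "vg": [
        {"vg_name": "ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e",
         "vg_size": "21470642176B", "vg_free": "1073741824B"}
      ]
    }
  ]
}
"""


def vg_sizes(report_json: str) -> dict:
    """Map each VG name to its total and free size in bytes."""
    report = json.loads(report_json)
    sizes = {}
    for block in report["report"]:
        for vg in block.get("vg", []):
            # `--units b` suffixes every size with "B"; strip it for arithmetic.
            sizes[vg["vg_name"]] = {
                "total": int(vg["vg_size"].rstrip("B")),
                "free": int(vg["vg_free"].rstrip("B")),
            }
    return sizes


print(vg_sizes(VGS_JSON))
```

With VG totals and free space in plain integers, the "Fail if size ... > available" checks reduce to simple comparisons per VG.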
2025-09-17 00:40:52.669443 | orchestrator |
2025-09-17 00:40:52.669563 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-17 00:40:52.669582 | orchestrator |
2025-09-17 00:40:52.669596 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-17 00:40:52.669608 | orchestrator | Wednesday 17 September 2025 00:40:43 +0000 (0:00:00.268) 0:00:00.268 ***
2025-09-17 00:40:52.669620 | orchestrator | ok: [testbed-manager]
2025-09-17 00:40:52.669632 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:40:52.669678 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:40:52.669707 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:40:52.669718 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:40:52.669740 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:40:52.669750 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:40:52.669761 | orchestrator |
2025-09-17 00:40:52.669772 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-17 00:40:52.669783 | orchestrator | Wednesday 17 September 2025 00:40:44 +0000 (0:00:01.120) 0:00:01.389 ***
2025-09-17 00:40:52.669794 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:40:52.669806 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:40:52.669817 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:40:52.669828 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:40:52.669839 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:40:52.669849 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:40:52.669860 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:52.669945 | orchestrator |
2025-09-17 00:40:52.669956 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-17 00:40:52.669967 | orchestrator |
2025-09-17 00:40:52.669978 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-17 00:40:52.669989 | orchestrator | Wednesday 17 September 2025 00:40:46 +0000 (0:00:01.225) 0:00:02.614 ***
2025-09-17 00:40:52.669999 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:40:52.670010 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:40:52.670088 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:40:52.670100 | orchestrator | ok: [testbed-manager]
2025-09-17 00:40:52.670111 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:40:52.670121 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:40:52.670132 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:40:52.670143 | orchestrator |
2025-09-17 00:40:52.670154 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-17 00:40:52.670164 | orchestrator |
2025-09-17 00:40:52.670175 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-17 00:40:52.670186 | orchestrator | Wednesday 17 September 2025 00:40:51 +0000 (0:00:05.673) 0:00:08.288 ***
2025-09-17 00:40:52.670197 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:40:52.670208 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:40:52.670218 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:40:52.670229 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:40:52.670240 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:40:52.670250 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:40:52.670261 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:40:52.670272 | orchestrator |
2025-09-17 00:40:52.670282 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:40:52.670293 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:40:52.670305 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:40:52.670316 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:40:52.670327 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:40:52.670338 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:40:52.670349 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:40:52.670359 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:40:52.670381 | orchestrator |
2025-09-17 00:40:52.670392 | orchestrator |
2025-09-17 00:40:52.670403 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:40:52.670414 | orchestrator | Wednesday 17 September 2025 00:40:52 +0000 (0:00:00.523) 0:00:08.811 ***
2025-09-17 00:40:52.670424 | orchestrator | ===============================================================================
2025-09-17 00:40:52.670435 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.67s
2025-09-17 00:40:52.670446 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s
2025-09-17 00:40:52.670456 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s
2025-09-17 00:40:52.670468 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-09-17 00:41:04.886183 | orchestrator | 2025-09-17 00:41:04 | INFO  | Task c6d91b46-923a-456f-b719-9e0ade6fd250 (frr) was prepared for execution.
2025-09-17 00:41:04.886295 | orchestrator | 2025-09-17 00:41:04 | INFO  | It takes a moment until task c6d91b46-923a-456f-b719-9e0ade6fd250 (frr) has been started and output is visible here.
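PLAY RECAP lines like the ones above follow a fixed `host : key=value ...` layout, which makes them easy to post-process when analysing job logs. A minimal parsing sketch (the helper name `parse_recap` is ours, not part of Ansible or Zuul):

```python
import re

# A recap line in the same layout as the PLAY RECAP output above.
RECAP_LINE = ("testbed-manager : ok=2  changed=0 unreachable=0 "
              "failed=0 skipped=2  rescued=0 ignored=0")


def parse_recap(line: str) -> tuple[str, dict]:
    """Split an Ansible PLAY RECAP line into (host, counters)."""
    host, _, rest = line.partition(":")
    # Each counter appears as key=integer; collect them all.
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters


host, counters = parse_recap(RECAP_LINE)
print(host, counters)
```

A recap parsed this way lets a log-scanning script flag any host with `failed > 0` or `unreachable > 0` without relying on exit codes alone.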
2025-09-17 00:41:28.992586 | orchestrator | 2025-09-17 00:41:28.992710 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-17 00:41:28.992725 | orchestrator | 2025-09-17 00:41:28.992738 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-17 00:41:28.992749 | orchestrator | Wednesday 17 September 2025 00:41:08 +0000 (0:00:00.201) 0:00:00.201 *** 2025-09-17 00:41:28.992781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-17 00:41:28.992794 | orchestrator | 2025-09-17 00:41:28.992805 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-17 00:41:28.992816 | orchestrator | Wednesday 17 September 2025 00:41:09 +0000 (0:00:00.181) 0:00:00.382 *** 2025-09-17 00:41:28.992827 | orchestrator | changed: [testbed-manager] 2025-09-17 00:41:28.992839 | orchestrator | 2025-09-17 00:41:28.992850 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-17 00:41:28.992861 | orchestrator | Wednesday 17 September 2025 00:41:09 +0000 (0:00:00.958) 0:00:01.340 *** 2025-09-17 00:41:28.992906 | orchestrator | changed: [testbed-manager] 2025-09-17 00:41:28.992918 | orchestrator | 2025-09-17 00:41:28.992934 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-17 00:41:28.992945 | orchestrator | Wednesday 17 September 2025 00:41:18 +0000 (0:00:08.827) 0:00:10.168 *** 2025-09-17 00:41:28.992956 | orchestrator | ok: [testbed-manager] 2025-09-17 00:41:28.992968 | orchestrator | 2025-09-17 00:41:28.992978 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-17 00:41:28.992989 | orchestrator | Wednesday 17 September 2025 00:41:20 +0000 (0:00:01.231) 0:00:11.400 *** 2025-09-17 
00:41:28.993000 | orchestrator | changed: [testbed-manager] 2025-09-17 00:41:28.993010 | orchestrator | 2025-09-17 00:41:28.993021 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-17 00:41:28.993032 | orchestrator | Wednesday 17 September 2025 00:41:20 +0000 (0:00:00.894) 0:00:12.295 *** 2025-09-17 00:41:28.993042 | orchestrator | ok: [testbed-manager] 2025-09-17 00:41:28.993053 | orchestrator | 2025-09-17 00:41:28.993064 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-17 00:41:28.993075 | orchestrator | Wednesday 17 September 2025 00:41:22 +0000 (0:00:01.143) 0:00:13.438 *** 2025-09-17 00:41:28.993086 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-17 00:41:28.993096 | orchestrator | 2025-09-17 00:41:28.993107 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-17 00:41:28.993117 | orchestrator | Wednesday 17 September 2025 00:41:22 +0000 (0:00:00.798) 0:00:14.237 *** 2025-09-17 00:41:28.993131 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:41:28.993143 | orchestrator | 2025-09-17 00:41:28.993156 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-17 00:41:28.993190 | orchestrator | Wednesday 17 September 2025 00:41:23 +0000 (0:00:00.159) 0:00:14.396 *** 2025-09-17 00:41:28.993203 | orchestrator | changed: [testbed-manager] 2025-09-17 00:41:28.993215 | orchestrator | 2025-09-17 00:41:28.993227 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-17 00:41:28.993239 | orchestrator | Wednesday 17 September 2025 00:41:23 +0000 (0:00:00.962) 0:00:15.359 *** 2025-09-17 00:41:28.993251 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-17 00:41:28.993264 | orchestrator | changed: [testbed-manager] => 
(item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-17 00:41:28.993276 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-17 00:41:28.993288 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-17 00:41:28.993300 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-17 00:41:28.993313 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-17 00:41:28.993325 | orchestrator | 2025-09-17 00:41:28.993337 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-17 00:41:28.993348 | orchestrator | Wednesday 17 September 2025 00:41:26 +0000 (0:00:02.121) 0:00:17.481 *** 2025-09-17 00:41:28.993360 | orchestrator | ok: [testbed-manager] 2025-09-17 00:41:28.993373 | orchestrator | 2025-09-17 00:41:28.993385 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-17 00:41:28.993397 | orchestrator | Wednesday 17 September 2025 00:41:27 +0000 (0:00:01.295) 0:00:18.777 *** 2025-09-17 00:41:28.993408 | orchestrator | changed: [testbed-manager] 2025-09-17 00:41:28.993420 | orchestrator | 2025-09-17 00:41:28.993432 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:41:28.993445 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:41:28.993457 | orchestrator | 2025-09-17 00:41:28.993469 | orchestrator | 2025-09-17 00:41:28.993482 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:41:28.993492 | orchestrator | Wednesday 17 September 2025 00:41:28 +0000 (0:00:01.344) 0:00:20.122 *** 2025-09-17 
00:41:28.993503 | orchestrator | ===============================================================================
2025-09-17 00:41:28.993513 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.83s
2025-09-17 00:41:28.993524 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.12s
2025-09-17 00:41:28.993534 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.35s
2025-09-17 00:41:28.993545 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.30s
2025-09-17 00:41:28.993572 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.23s
2025-09-17 00:41:28.993584 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.14s
2025-09-17 00:41:28.993595 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.96s
2025-09-17 00:41:28.993606 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 0.96s
2025-09-17 00:41:28.993616 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.89s
2025-09-17 00:41:28.993627 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.80s
2025-09-17 00:41:28.993638 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.18s
2025-09-17 00:41:28.993649 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s
2025-09-17 00:41:29.242185 | orchestrator |
2025-09-17 00:41:29.245156 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Sep 17 00:41:29 UTC 2025
2025-09-17 00:41:29.245213 | orchestrator |
2025-09-17 00:41:31.075561 | orchestrator | 2025-09-17 00:41:31 | INFO  | Collection nutshell is prepared for execution
2025-09-17 00:41:31.075664 | orchestrator | 2025-09-17
00:41:31 | INFO  | D [0] - dotfiles
2025-09-17 00:41:41.188670 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [0] - homer
2025-09-17 00:41:41.188776 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [0] - netdata
2025-09-17 00:41:41.188790 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [0] - openstackclient
2025-09-17 00:41:41.188812 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [0] - phpmyadmin
2025-09-17 00:41:41.189283 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [0] - common
2025-09-17 00:41:41.193145 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [1] -- loadbalancer
2025-09-17 00:41:41.193526 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [2] --- opensearch
2025-09-17 00:41:41.193819 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [2] --- mariadb-ng
2025-09-17 00:41:41.194190 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [3] ---- horizon
2025-09-17 00:41:41.194471 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [3] ---- keystone
2025-09-17 00:41:41.194794 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [4] ----- neutron
2025-09-17 00:41:41.195324 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [5] ------ wait-for-nova
2025-09-17 00:41:41.195405 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [5] ------ octavia
2025-09-17 00:41:41.196625 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [4] ----- barbican
2025-09-17 00:41:41.196925 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [4] ----- designate
2025-09-17 00:41:41.197119 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [4] ----- ironic
2025-09-17 00:41:41.197378 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [4] ----- placement
2025-09-17 00:41:41.197612 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [4] ----- magnum
2025-09-17 00:41:41.198411 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [1] -- openvswitch
2025-09-17 00:41:41.198655 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [2] --- ovn
2025-09-17 00:41:41.199059 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [1] -- memcached
2025-09-17 00:41:41.199367 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [1] -- redis
2025-09-17 00:41:41.199480 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [1] -- rabbitmq-ng
2025-09-17 00:41:41.199897 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [0] - kubernetes
2025-09-17 00:41:41.202311 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [1] -- kubeconfig
2025-09-17 00:41:41.202330 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [1] -- copy-kubeconfig
2025-09-17 00:41:41.202678 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [0] - ceph
2025-09-17 00:41:41.204773 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [1] -- ceph-pools
2025-09-17 00:41:41.204791 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [2] --- copy-ceph-keys
2025-09-17 00:41:41.205017 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [3] ---- cephclient
2025-09-17 00:41:41.205039 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-09-17 00:41:41.205148 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [4] ----- wait-for-keystone
2025-09-17 00:41:41.205366 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [5] ------ kolla-ceph-rgw
2025-09-17 00:41:41.205384 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [5] ------ glance
2025-09-17 00:41:41.205510 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [5] ------ cinder
2025-09-17 00:41:41.205741 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [5] ------ nova
2025-09-17 00:41:41.206283 | orchestrator | 2025-09-17 00:41:41 | INFO  | A [4] ----- prometheus
2025-09-17 00:41:41.206304 | orchestrator | 2025-09-17 00:41:41 | INFO  | D [5] ------ grafana
2025-09-17 00:41:41.391155 | orchestrator | 2025-09-17 00:41:41 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-09-17 00:41:41.391227 | orchestrator | 2025-09-17 00:41:41 | INFO  | Tasks are running in the background
2025-09-17 00:41:44.323033 | orchestrator | 2025-09-17 00:41:44 | INFO  | No task IDs specified, wait for
all currently running tasks
2025-09-17 00:41:46.436249 | orchestrator | 2025-09-17 00:41:46 | INFO  | Task de4de644-2c35-44d9-a2ab-9b8aba83a35b is in state STARTED
2025-09-17 00:41:46.436369 | orchestrator | 2025-09-17 00:41:46 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:41:46.436745 | orchestrator | 2025-09-17 00:41:46 | INFO  | Task d0c9821e-f2a1-446f-80cb-1c7771081b54 is in state STARTED
2025-09-17 00:41:46.437290 | orchestrator | 2025-09-17 00:41:46 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED
2025-09-17 00:41:46.437831 | orchestrator | 2025-09-17 00:41:46 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED
2025-09-17 00:41:46.440091 | orchestrator | 2025-09-17 00:41:46 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:41:46.441299 | orchestrator | 2025-09-17 00:41:46 | INFO  | Task a7c40d4f-de65-40e2-80ae-461b1097a720 is in state STARTED
2025-09-17 00:41:46.441324 | orchestrator | 2025-09-17 00:41:46 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:42:05.129561 | orchestrator | 2025-09-17 00:42:05 | INFO  | Task de4de644-2c35-44d9-a2ab-9b8aba83a35b is in state STARTED
2025-09-17 00:42:05.130476 | orchestrator | 2025-09-17 00:42:05 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:42:05.132106 | orchestrator | 2025-09-17 00:42:05 | INFO  | Task d0c9821e-f2a1-446f-80cb-1c7771081b54 is in state STARTED
2025-09-17 00:42:05.132131 | orchestrator | 2025-09-17 00:42:05 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED
2025-09-17 00:42:05.133173 | orchestrator | 2025-09-17 00:42:05 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED
2025-09-17 00:42:05.133729 | orchestrator | 2025-09-17 00:42:05 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:42:05.134374 | orchestrator | 2025-09-17 00:42:05 | INFO  | Task a7c40d4f-de65-40e2-80ae-461b1097a720 is in state STARTED
2025-09-17 00:42:05.134397 | orchestrator | 2025-09-17 00:42:05 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:42:08.337687 | orchestrator | 2025-09-17 00:42:08 | INFO  | Task de4de644-2c35-44d9-a2ab-9b8aba83a35b is in state STARTED
2025-09-17 00:42:08.338712 | orchestrator |
2025-09-17 00:42:08.338760 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-17 00:42:08.338773 | orchestrator |
2025-09-17 00:42:08.338785 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
**** 2025-09-17 00:42:08.338797 | orchestrator | Wednesday 17 September 2025 00:41:53 +0000 (0:00:00.811) 0:00:00.811 *** 2025-09-17 00:42:08.338809 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:42:08.338821 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:42:08.338832 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:42:08.338844 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:42:08.338855 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:42:08.338866 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:42:08.338910 | orchestrator | changed: [testbed-manager] 2025-09-17 00:42:08.338921 | orchestrator | 2025-09-17 00:42:08.338933 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-09-17 00:42:08.338944 | orchestrator | Wednesday 17 September 2025 00:41:58 +0000 (0:00:04.885) 0:00:05.696 *** 2025-09-17 00:42:08.338955 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-17 00:42:08.338966 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-17 00:42:08.338977 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-17 00:42:08.338988 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-17 00:42:08.338998 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-17 00:42:08.339009 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-17 00:42:08.339019 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-17 00:42:08.339030 | orchestrator | 2025-09-17 00:42:08.339040 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-09-17 00:42:08.339051 | orchestrator | Wednesday 17 September 2025 00:41:59 +0000 (0:00:01.702) 0:00:07.399 *** 2025-09-17 00:42:08.339073 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 00:41:58.693132', 'end': '2025-09-17 00:41:58.700181', 'delta': '0:00:00.007049', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 00:42:08.339089 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 00:41:59.452420', 'end': '2025-09-17 00:41:59.462060', 'delta': '0:00:00.009640', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 00:42:08.339121 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 00:41:58.748989', 'end': '2025-09-17 00:41:58.759700', 'delta': '0:00:00.010711', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 00:42:08.339156 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 00:41:59.633129', 'end': '2025-09-17 00:41:59.643305', 'delta': '0:00:00.010176', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 00:42:08.339168 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 00:41:59.344686', 'end': '2025-09-17 00:41:59.353373', 'delta': '0:00:00.008687', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 00:42:08.339432 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 00:41:59.306600', 'end': '2025-09-17 00:41:59.315293', 'delta': '0:00:00.008693', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 00:42:08.339447 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 00:41:59.116842', 'end': '2025-09-17 00:41:59.126090', 'delta': '0:00:00.009248', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 00:42:08.339475 | orchestrator | 2025-09-17 00:42:08.339489 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-17 00:42:08.339502 | orchestrator | Wednesday 17 September 2025 00:42:01 +0000 (0:00:01.324) 0:00:08.723 *** 2025-09-17 00:42:08.339514 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-17 00:42:08.339527 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-17 00:42:08.339540 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-17 00:42:08.339552 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-17 00:42:08.339565 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-17 00:42:08.339577 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-17 00:42:08.339590 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-17 00:42:08.339602 | orchestrator | 2025-09-17 00:42:08.339620 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-09-17 00:42:08.339633 | orchestrator | Wednesday 17 September 2025 00:42:02 +0000 (0:00:01.617) 0:00:10.341 *** 2025-09-17 00:42:08.339646 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-17 00:42:08.339657 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-17 00:42:08.339668 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-17 00:42:08.339679 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-17 00:42:08.339690 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-17 00:42:08.339700 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-17 00:42:08.339711 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-17 00:42:08.339722 | orchestrator | 2025-09-17 00:42:08.339733 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:42:08.339753 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:42:08.339767 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:42:08.339778 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:42:08.339790 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:42:08.339800 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:42:08.339811 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:42:08.339822 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:42:08.339833 | orchestrator | 2025-09-17 00:42:08.339843 | orchestrator | 2025-09-17 00:42:08.339854 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-09-17 00:42:08.339865 | orchestrator | Wednesday 17 September 2025 00:42:05 +0000 (0:00:02.763) 0:00:13.105 *** 2025-09-17 00:42:08.339900 | orchestrator | =============================================================================== 2025-09-17 00:42:08.339912 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.89s 2025-09-17 00:42:08.339933 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.76s 2025-09-17 00:42:08.339943 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.70s 2025-09-17 00:42:08.339954 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.62s 2025-09-17 00:42:08.339965 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.32s 2025-09-17 00:42:08.339976 | orchestrator | 2025-09-17 00:42:08 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:42:08.339987 | orchestrator | 2025-09-17 00:42:08 | INFO  | Task d0c9821e-f2a1-446f-80cb-1c7771081b54 is in state SUCCESS 2025-09-17 00:42:08.339998 | orchestrator | 2025-09-17 00:42:08 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:42:08.340009 | orchestrator | 2025-09-17 00:42:08 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED 2025-09-17 00:42:08.340019 | orchestrator | 2025-09-17 00:42:08 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:42:08.340030 | orchestrator | 2025-09-17 00:42:08 | INFO  | Task a7c40d4f-de65-40e2-80ae-461b1097a720 is in state STARTED 2025-09-17 00:42:08.340041 | orchestrator | 2025-09-17 00:42:08 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED 2025-09-17 00:42:08.340052 | orchestrator | 2025-09-17 00:42:08 | INFO  | Wait 1 second(s) 
until the next check
2025-09-17 00:42:11.371383 | orchestrator | 2025-09-17 00:42:11 | INFO  | Task de4de644-2c35-44d9-a2ab-9b8aba83a35b is in state STARTED
2025-09-17 00:42:11.371492 | orchestrator | 2025-09-17 00:42:11 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:42:11.371508 | orchestrator | 2025-09-17 00:42:11 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED
2025-09-17 00:42:11.371520 | orchestrator | 2025-09-17 00:42:11 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED
2025-09-17 00:42:11.371530 | orchestrator | 2025-09-17 00:42:11 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:42:11.371542 | orchestrator | 2025-09-17 00:42:11 | INFO  | Task a7c40d4f-de65-40e2-80ae-461b1097a720 is in state STARTED
2025-09-17 00:42:11.371573 | orchestrator | 2025-09-17 00:42:11 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED
2025-09-17 00:42:11.371585 | orchestrator | 2025-09-17 00:42:11 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:42:29.785618 | orchestrator | 2025-09-17 00:42:29 | INFO  | Task de4de644-2c35-44d9-a2ab-9b8aba83a35b is in state STARTED
2025-09-17 00:42:29.785741 | orchestrator | 2025-09-17 00:42:29 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:42:29.787470 | orchestrator | 2025-09-17 00:42:29 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED
2025-09-17 00:42:29.788106 | orchestrator | 2025-09-17 00:42:29 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED
2025-09-17 00:42:29.789096 | orchestrator | 2025-09-17 00:42:29 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:42:29.789401 | orchestrator | 2025-09-17 00:42:29 | INFO  | Task a7c40d4f-de65-40e2-80ae-461b1097a720 is in state SUCCESS
2025-09-17 00:42:29.790350 | orchestrator | 2025-09-17 00:42:29 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED
2025-09-17 00:42:29.790383 | orchestrator | 2025-09-17 00:42:29 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:42:39.136309 | orchestrator | 2025-09-17 00:42:39 | INFO  | Task de4de644-2c35-44d9-a2ab-9b8aba83a35b is in state SUCCESS
2025-09-17 00:42:39.136510 | orchestrator | 2025-09-17 00:42:39 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:42:39.137206 | orchestrator | 2025-09-17 00:42:39 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED
2025-09-17 00:42:39.142481 | orchestrator | 2025-09-17 00:42:39 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED
2025-09-17 00:42:39.144256 | orchestrator | 2025-09-17 00:42:39 | INFO  | Task
abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:42:39.146087 | orchestrator | 2025-09-17 00:42:39 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED 2025-09-17 00:42:39.146528 | orchestrator | 2025-09-17 00:42:39 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:42:42.216418 | orchestrator | 2025-09-17 00:42:42 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:42:42.216560 | orchestrator | 2025-09-17 00:42:42 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:42:42.216576 | orchestrator | 2025-09-17 00:42:42 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED 2025-09-17 00:42:42.216589 | orchestrator | 2025-09-17 00:42:42 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:42:42.216600 | orchestrator | 2025-09-17 00:42:42 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED 2025-09-17 00:42:42.216612 | orchestrator | 2025-09-17 00:42:42 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:42:45.240612 | orchestrator | 2025-09-17 00:42:45 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:42:45.241084 | orchestrator | 2025-09-17 00:42:45 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:42:45.242196 | orchestrator | 2025-09-17 00:42:45 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED 2025-09-17 00:42:45.242565 | orchestrator | 2025-09-17 00:42:45 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:42:45.243204 | orchestrator | 2025-09-17 00:42:45 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED 2025-09-17 00:42:45.243227 | orchestrator | 2025-09-17 00:42:45 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:42:48.291449 | orchestrator | 2025-09-17 00:42:48 | INFO  | Task 
dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:42:48.291552 | orchestrator | 2025-09-17 00:42:48 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:42:48.293330 | orchestrator | 2025-09-17 00:42:48 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED 2025-09-17 00:42:48.294531 | orchestrator | 2025-09-17 00:42:48 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:42:48.296690 | orchestrator | 2025-09-17 00:42:48 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED 2025-09-17 00:42:48.296718 | orchestrator | 2025-09-17 00:42:48 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:42:51.340408 | orchestrator | 2025-09-17 00:42:51 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:42:51.341230 | orchestrator | 2025-09-17 00:42:51 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:42:51.342628 | orchestrator | 2025-09-17 00:42:51 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED 2025-09-17 00:42:51.344453 | orchestrator | 2025-09-17 00:42:51 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:42:51.345447 | orchestrator | 2025-09-17 00:42:51 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED 2025-09-17 00:42:51.345471 | orchestrator | 2025-09-17 00:42:51 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:42:54.381147 | orchestrator | 2025-09-17 00:42:54 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:42:54.382696 | orchestrator | 2025-09-17 00:42:54 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:42:54.387233 | orchestrator | 2025-09-17 00:42:54 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED 2025-09-17 00:42:54.390198 | orchestrator | 2025-09-17 00:42:54 | INFO  | Task 
abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:42:54.392220 | orchestrator | 2025-09-17 00:42:54 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED 2025-09-17 00:42:54.392864 | orchestrator | 2025-09-17 00:42:54 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:42:57.428136 | orchestrator | 2025-09-17 00:42:57 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:42:57.430631 | orchestrator | 2025-09-17 00:42:57 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:42:57.434113 | orchestrator | 2025-09-17 00:42:57 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state STARTED 2025-09-17 00:42:57.437743 | orchestrator | 2025-09-17 00:42:57 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:42:57.440289 | orchestrator | 2025-09-17 00:42:57 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED 2025-09-17 00:42:57.441480 | orchestrator | 2025-09-17 00:42:57 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:43:00.487567 | orchestrator | 2025-09-17 00:43:00 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:43:00.489543 | orchestrator | 2025-09-17 00:43:00 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:43:00.490684 | orchestrator | 2025-09-17 00:43:00 | INFO  | Task b4847093-53e9-4fda-a814-30f391c7f045 is in state SUCCESS 2025-09-17 00:43:00.493520 | orchestrator | 2025-09-17 00:43:00.493558 | orchestrator | 2025-09-17 00:43:00.493571 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-17 00:43:00.493582 | orchestrator | 2025-09-17 00:43:00.493593 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-17 00:43:00.493605 | orchestrator | Wednesday 17 September 2025 00:41:53 +0000 (0:00:00.590) 
0:00:00.590 *** 2025-09-17 00:43:00.493616 | orchestrator | ok: [testbed-manager] => { 2025-09-17 00:43:00.493629 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-09-17 00:43:00.493642 | orchestrator | } 2025-09-17 00:43:00.493653 | orchestrator | 2025-09-17 00:43:00.493664 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-17 00:43:00.493674 | orchestrator | Wednesday 17 September 2025 00:41:53 +0000 (0:00:00.156) 0:00:00.747 *** 2025-09-17 00:43:00.493685 | orchestrator | ok: [testbed-manager] 2025-09-17 00:43:00.493696 | orchestrator | 2025-09-17 00:43:00.493707 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-17 00:43:00.493718 | orchestrator | Wednesday 17 September 2025 00:41:54 +0000 (0:00:01.316) 0:00:02.063 *** 2025-09-17 00:43:00.493728 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-17 00:43:00.493739 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-17 00:43:00.493750 | orchestrator | 2025-09-17 00:43:00.493760 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-17 00:43:00.493771 | orchestrator | Wednesday 17 September 2025 00:41:57 +0000 (0:00:02.270) 0:00:04.333 *** 2025-09-17 00:43:00.493781 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.493792 | orchestrator | 2025-09-17 00:43:00.493803 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-17 00:43:00.493833 | orchestrator | Wednesday 17 September 2025 00:41:58 +0000 (0:00:01.624) 0:00:05.957 *** 2025-09-17 00:43:00.493844 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.493855 | orchestrator | 2025-09-17 00:43:00.493865 | orchestrator | TASK [osism.services.homer : Manage homer service] 
***************************** 2025-09-17 00:43:00.493876 | orchestrator | Wednesday 17 September 2025 00:42:00 +0000 (0:00:01.622) 0:00:07.580 *** 2025-09-17 00:43:00.493910 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-09-17 00:43:00.493921 | orchestrator | ok: [testbed-manager] 2025-09-17 00:43:00.493932 | orchestrator | 2025-09-17 00:43:00.493970 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-17 00:43:00.493982 | orchestrator | Wednesday 17 September 2025 00:42:26 +0000 (0:00:26.175) 0:00:33.756 *** 2025-09-17 00:43:00.493993 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.494003 | orchestrator | 2025-09-17 00:43:00.494072 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:43:00.494087 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:43:00.494101 | orchestrator | 2025-09-17 00:43:00.494114 | orchestrator | 2025-09-17 00:43:00.494126 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:43:00.494138 | orchestrator | Wednesday 17 September 2025 00:42:29 +0000 (0:00:03.080) 0:00:36.836 *** 2025-09-17 00:43:00.494151 | orchestrator | =============================================================================== 2025-09-17 00:43:00.494164 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.18s 2025-09-17 00:43:00.494177 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.08s 2025-09-17 00:43:00.494189 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.27s 2025-09-17 00:43:00.494201 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.62s 2025-09-17 00:43:00.494214 | orchestrator | 
osism.services.homer : Copy docker-compose.yml file --------------------- 1.62s 2025-09-17 00:43:00.494226 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.32s 2025-09-17 00:43:00.494239 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.16s 2025-09-17 00:43:00.494251 | orchestrator | 2025-09-17 00:43:00.494264 | orchestrator | 2025-09-17 00:43:00.494276 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-17 00:43:00.494288 | orchestrator | 2025-09-17 00:43:00.494301 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-17 00:43:00.494313 | orchestrator | Wednesday 17 September 2025 00:41:53 +0000 (0:00:01.001) 0:00:01.001 *** 2025-09-17 00:43:00.494326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-17 00:43:00.494340 | orchestrator | 2025-09-17 00:43:00.494353 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-17 00:43:00.494365 | orchestrator | Wednesday 17 September 2025 00:41:53 +0000 (0:00:00.366) 0:00:01.367 *** 2025-09-17 00:43:00.494378 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-17 00:43:00.494391 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-17 00:43:00.494403 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-17 00:43:00.494416 | orchestrator | 2025-09-17 00:43:00.494429 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-17 00:43:00.494442 | orchestrator | Wednesday 17 September 2025 00:41:57 +0000 (0:00:03.235) 0:00:04.603 *** 2025-09-17 00:43:00.494453 | orchestrator | changed: 
[testbed-manager] 2025-09-17 00:43:00.494464 | orchestrator | 2025-09-17 00:43:00.494475 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-17 00:43:00.494486 | orchestrator | Wednesday 17 September 2025 00:41:58 +0000 (0:00:01.247) 0:00:05.851 *** 2025-09-17 00:43:00.494517 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-17 00:43:00.494529 | orchestrator | ok: [testbed-manager] 2025-09-17 00:43:00.494540 | orchestrator | 2025-09-17 00:43:00.494551 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-17 00:43:00.494561 | orchestrator | Wednesday 17 September 2025 00:42:31 +0000 (0:00:32.983) 0:00:38.834 *** 2025-09-17 00:43:00.494572 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.494583 | orchestrator | 2025-09-17 00:43:00.494625 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-17 00:43:00.494637 | orchestrator | Wednesday 17 September 2025 00:42:32 +0000 (0:00:01.601) 0:00:40.436 *** 2025-09-17 00:43:00.494648 | orchestrator | ok: [testbed-manager] 2025-09-17 00:43:00.494659 | orchestrator | 2025-09-17 00:43:00.494670 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-17 00:43:00.494681 | orchestrator | Wednesday 17 September 2025 00:42:33 +0000 (0:00:01.116) 0:00:41.552 *** 2025-09-17 00:43:00.494692 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.494703 | orchestrator | 2025-09-17 00:43:00.494714 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-17 00:43:00.494725 | orchestrator | Wednesday 17 September 2025 00:42:36 +0000 (0:00:02.522) 0:00:44.075 *** 2025-09-17 00:43:00.494736 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.494746 | orchestrator | 2025-09-17 
00:43:00.494757 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-17 00:43:00.494768 | orchestrator | Wednesday 17 September 2025 00:42:37 +0000 (0:00:01.011) 0:00:45.087 *** 2025-09-17 00:43:00.494779 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.494790 | orchestrator | 2025-09-17 00:43:00.494800 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-17 00:43:00.494811 | orchestrator | Wednesday 17 September 2025 00:42:38 +0000 (0:00:00.578) 0:00:45.666 *** 2025-09-17 00:43:00.494822 | orchestrator | ok: [testbed-manager] 2025-09-17 00:43:00.494833 | orchestrator | 2025-09-17 00:43:00.494844 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:43:00.494854 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:43:00.494865 | orchestrator | 2025-09-17 00:43:00.494876 | orchestrator | 2025-09-17 00:43:00.494924 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:43:00.494935 | orchestrator | Wednesday 17 September 2025 00:42:38 +0000 (0:00:00.611) 0:00:46.277 *** 2025-09-17 00:43:00.494957 | orchestrator | =============================================================================== 2025-09-17 00:43:00.494968 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.98s 2025-09-17 00:43:00.494979 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.24s 2025-09-17 00:43:00.494990 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.52s 2025-09-17 00:43:00.495000 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.60s 2025-09-17 00:43:00.495011 | orchestrator | osism.services.openstackclient : Copy 
docker-compose.yml file ----------- 1.25s 2025-09-17 00:43:00.495022 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.12s 2025-09-17 00:43:00.495033 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.01s 2025-09-17 00:43:00.495043 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.61s 2025-09-17 00:43:00.495054 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.58s 2025-09-17 00:43:00.495065 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.37s 2025-09-17 00:43:00.495076 | orchestrator | 2025-09-17 00:43:00.495087 | orchestrator | 2025-09-17 00:43:00.495097 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 00:43:00.495115 | orchestrator | 2025-09-17 00:43:00.495127 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 00:43:00.495137 | orchestrator | Wednesday 17 September 2025 00:41:52 +0000 (0:00:00.471) 0:00:00.471 *** 2025-09-17 00:43:00.495148 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-17 00:43:00.495159 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-17 00:43:00.495170 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-17 00:43:00.495180 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-17 00:43:00.495191 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-17 00:43:00.495202 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-17 00:43:00.495212 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-17 00:43:00.495223 | orchestrator | 2025-09-17 00:43:00.495234 | orchestrator | PLAY [Apply role netdata] 
****************************************************** 2025-09-17 00:43:00.495244 | orchestrator | 2025-09-17 00:43:00.495255 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-17 00:43:00.495266 | orchestrator | Wednesday 17 September 2025 00:41:54 +0000 (0:00:02.396) 0:00:02.868 *** 2025-09-17 00:43:00.495294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:43:00.495308 | orchestrator | 2025-09-17 00:43:00.495320 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-17 00:43:00.495330 | orchestrator | Wednesday 17 September 2025 00:41:56 +0000 (0:00:01.275) 0:00:04.144 *** 2025-09-17 00:43:00.495341 | orchestrator | ok: [testbed-manager] 2025-09-17 00:43:00.495352 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:43:00.495363 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:43:00.495374 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:43:00.495384 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:43:00.495400 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:43:00.495412 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:43:00.495422 | orchestrator | 2025-09-17 00:43:00.495433 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-17 00:43:00.495444 | orchestrator | Wednesday 17 September 2025 00:41:57 +0000 (0:00:01.579) 0:00:05.724 *** 2025-09-17 00:43:00.495455 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:43:00.495466 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:43:00.495476 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:43:00.495487 | orchestrator | ok: [testbed-manager] 2025-09-17 00:43:00.495498 | orchestrator | ok: [testbed-node-5] 2025-09-17 
00:43:00.495508 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:43:00.495519 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:43:00.495529 | orchestrator | 2025-09-17 00:43:00.495540 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-17 00:43:00.495551 | orchestrator | Wednesday 17 September 2025 00:42:00 +0000 (0:00:03.111) 0:00:08.836 *** 2025-09-17 00:43:00.495562 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:43:00.495573 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:43:00.495583 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:43:00.495594 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:43:00.495605 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:43:00.495615 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:43:00.495626 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.495637 | orchestrator | 2025-09-17 00:43:00.495647 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-17 00:43:00.495658 | orchestrator | Wednesday 17 September 2025 00:42:03 +0000 (0:00:02.324) 0:00:11.160 *** 2025-09-17 00:43:00.495669 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:43:00.495680 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:43:00.495691 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:43:00.495707 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:43:00.495718 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:43:00.495729 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:43:00.495739 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.495750 | orchestrator | 2025-09-17 00:43:00.495761 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-17 00:43:00.495771 | orchestrator | Wednesday 17 September 2025 00:42:14 +0000 (0:00:10.948) 0:00:22.108 *** 2025-09-17 00:43:00.495782 | 
orchestrator | changed: [testbed-node-1] 2025-09-17 00:43:00.495793 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:43:00.495803 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:43:00.495814 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:43:00.495824 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:43:00.495835 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:43:00.495846 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.495856 | orchestrator | 2025-09-17 00:43:00.495867 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-17 00:43:00.495878 | orchestrator | Wednesday 17 September 2025 00:42:39 +0000 (0:00:25.232) 0:00:47.341 *** 2025-09-17 00:43:00.495925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:43:00.495938 | orchestrator | 2025-09-17 00:43:00.495949 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-17 00:43:00.495960 | orchestrator | Wednesday 17 September 2025 00:42:40 +0000 (0:00:01.171) 0:00:48.512 *** 2025-09-17 00:43:00.495971 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-17 00:43:00.495982 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-17 00:43:00.495993 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-17 00:43:00.496003 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-17 00:43:00.496014 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-17 00:43:00.496025 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-17 00:43:00.496036 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-17 00:43:00.496046 | orchestrator | 
changed: [testbed-manager] => (item=stream.conf) 2025-09-17 00:43:00.496057 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-17 00:43:00.496067 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-17 00:43:00.496078 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-17 00:43:00.496089 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-17 00:43:00.496099 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-17 00:43:00.496110 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-17 00:43:00.496121 | orchestrator | 2025-09-17 00:43:00.496131 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-17 00:43:00.496142 | orchestrator | Wednesday 17 September 2025 00:42:45 +0000 (0:00:04.550) 0:00:53.063 *** 2025-09-17 00:43:00.496153 | orchestrator | ok: [testbed-manager] 2025-09-17 00:43:00.496164 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:43:00.496175 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:43:00.496185 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:43:00.496196 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:43:00.496207 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:43:00.496217 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:43:00.496228 | orchestrator | 2025-09-17 00:43:00.496239 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-17 00:43:00.496249 | orchestrator | Wednesday 17 September 2025 00:42:46 +0000 (0:00:01.120) 0:00:54.184 *** 2025-09-17 00:43:00.496265 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.496276 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:43:00.496293 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:43:00.496304 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:43:00.496315 | orchestrator | changed: [testbed-node-3] 
2025-09-17 00:43:00.496325 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:43:00.496336 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:43:00.496347 | orchestrator | 2025-09-17 00:43:00.496357 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-17 00:43:00.496375 | orchestrator | Wednesday 17 September 2025 00:42:47 +0000 (0:00:01.523) 0:00:55.708 *** 2025-09-17 00:43:00.496386 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:43:00.496397 | orchestrator | ok: [testbed-manager] 2025-09-17 00:43:00.496407 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:43:00.496418 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:43:00.496429 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:43:00.496440 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:43:00.496450 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:43:00.496461 | orchestrator | 2025-09-17 00:43:00.496472 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-17 00:43:00.496483 | orchestrator | Wednesday 17 September 2025 00:42:49 +0000 (0:00:01.593) 0:00:57.302 *** 2025-09-17 00:43:00.496493 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:43:00.496504 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:43:00.496515 | orchestrator | ok: [testbed-manager] 2025-09-17 00:43:00.496525 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:43:00.496536 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:43:00.496546 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:43:00.496557 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:43:00.496567 | orchestrator | 2025-09-17 00:43:00.496578 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-17 00:43:00.496589 | orchestrator | Wednesday 17 September 2025 00:42:51 +0000 (0:00:02.182) 0:00:59.485 *** 2025-09-17 00:43:00.496600 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-17 00:43:00.496613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:43:00.496624 | orchestrator | 2025-09-17 00:43:00.496635 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-17 00:43:00.496646 | orchestrator | Wednesday 17 September 2025 00:42:53 +0000 (0:00:01.758) 0:01:01.243 *** 2025-09-17 00:43:00.496657 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.496667 | orchestrator | 2025-09-17 00:43:00.496678 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-17 00:43:00.496689 | orchestrator | Wednesday 17 September 2025 00:42:55 +0000 (0:00:02.767) 0:01:04.011 *** 2025-09-17 00:43:00.496699 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:43:00.496710 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:43:00.496721 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:43:00.496731 | orchestrator | changed: [testbed-manager] 2025-09-17 00:43:00.496742 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:43:00.496752 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:43:00.496763 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:43:00.496774 | orchestrator | 2025-09-17 00:43:00.496785 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:43:00.496795 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:43:00.496806 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:43:00.496817 | orchestrator | testbed-node-1 : 
ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:43:00.496828 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:43:00.496845 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:43:00.496856 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:43:00.496867 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:43:00.496877 | orchestrator | 2025-09-17 00:43:00.496905 | orchestrator | 2025-09-17 00:43:00.496916 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:43:00.496926 | orchestrator | Wednesday 17 September 2025 00:42:59 +0000 (0:00:03.247) 0:01:07.258 *** 2025-09-17 00:43:00.496937 | orchestrator | =============================================================================== 2025-09-17 00:43:00.496948 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 25.23s 2025-09-17 00:43:00.496959 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.95s 2025-09-17 00:43:00.496970 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.55s 2025-09-17 00:43:00.496980 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.25s 2025-09-17 00:43:00.496991 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.11s 2025-09-17 00:43:00.497002 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.77s 2025-09-17 00:43:00.497022 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.40s 2025-09-17 00:43:00.497033 | orchestrator | osism.services.netdata : Add 
repository gpg key ------------------------- 2.32s 2025-09-17 00:43:00.497044 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.18s 2025-09-17 00:43:00.497155 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.76s 2025-09-17 00:43:00.497167 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.59s 2025-09-17 00:43:00.497185 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.58s 2025-09-17 00:43:00.497196 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.52s 2025-09-17 00:43:00.497207 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.28s 2025-09-17 00:43:00.497217 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.17s 2025-09-17 00:43:00.497228 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.12s 2025-09-17 00:43:00.497239 | orchestrator | 2025-09-17 00:43:00 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:43:00.497250 | orchestrator | 2025-09-17 00:43:00 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED 2025-09-17 00:43:00.497261 | orchestrator | 2025-09-17 00:43:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:43:03.526097 | orchestrator | 2025-09-17 00:43:03 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:43:03.527984 | orchestrator | 2025-09-17 00:43:03 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:43:03.529933 | orchestrator | 2025-09-17 00:43:03 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:43:03.532552 | orchestrator | 2025-09-17 00:43:03 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state STARTED 
2025-09-17 00:43:03.532597 | orchestrator | 2025-09-17 00:43:03 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:43:06.571923 | orchestrator | 2025-09-17 00:43:06 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:43:06.573370 | orchestrator | 2025-09-17 00:43:06 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:43:06.574907 | orchestrator | 2025-09-17 00:43:06 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:43:06.576491 | orchestrator | 2025-09-17 00:43:06 | INFO  | Task 5c1ae259-03ad-41ce-ab3e-e3eff150e21e is in state SUCCESS 2025-09-17 00:43:06.576517 | orchestrator | 2025-09-17 00:43:06 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:01.664412 | orchestrator | 2025-09-17 00:44:01 |
INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:01.664433 | orchestrator | 2025-09-17 00:44:01 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:04.701951 | orchestrator | 2025-09-17 00:44:04 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:04.703268 | orchestrator | 2025-09-17 00:44:04 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:44:04.706121 | orchestrator | 2025-09-17 00:44:04 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:04.706487 | orchestrator | 2025-09-17 00:44:04 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:07.750395 | orchestrator | 2025-09-17 00:44:07 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:07.751511 | orchestrator | 2025-09-17 00:44:07 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state STARTED 2025-09-17 00:44:07.752779 | orchestrator | 2025-09-17 00:44:07 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:07.753024 | orchestrator | 2025-09-17 00:44:07 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:10.788353 | orchestrator | 2025-09-17 00:44:10 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:10.788559 | orchestrator | 2025-09-17 00:44:10 | INFO  | Task da2927f4-6dd6-4150-bdd5-6f3f2bed7433 is in state STARTED 2025-09-17 00:44:10.795148 | orchestrator | 2025-09-17 00:44:10 | INFO  | Task ba9295f4-6eaf-4d93-8c13-66dd4de16974 is in state SUCCESS 2025-09-17 00:44:10.795507 | orchestrator | 2025-09-17 00:44:10.795536 | orchestrator | 2025-09-17 00:44:10.795548 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-17 00:44:10.795561 | orchestrator | 2025-09-17 00:44:10.795573 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] 
************* 2025-09-17 00:44:10.795585 | orchestrator | Wednesday 17 September 2025 00:42:10 +0000 (0:00:00.253) 0:00:00.253 *** 2025-09-17 00:44:10.795597 | orchestrator | ok: [testbed-manager] 2025-09-17 00:44:10.795610 | orchestrator | 2025-09-17 00:44:10.795622 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-17 00:44:10.795634 | orchestrator | Wednesday 17 September 2025 00:42:11 +0000 (0:00:00.763) 0:00:01.017 *** 2025-09-17 00:44:10.795645 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-17 00:44:10.795657 | orchestrator | 2025-09-17 00:44:10.795669 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-17 00:44:10.795681 | orchestrator | Wednesday 17 September 2025 00:42:11 +0000 (0:00:00.490) 0:00:01.508 *** 2025-09-17 00:44:10.795692 | orchestrator | changed: [testbed-manager] 2025-09-17 00:44:10.795703 | orchestrator | 2025-09-17 00:44:10.795715 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-17 00:44:10.795726 | orchestrator | Wednesday 17 September 2025 00:42:13 +0000 (0:00:01.218) 0:00:02.727 *** 2025-09-17 00:44:10.795737 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-09-17 00:44:10.795749 | orchestrator | ok: [testbed-manager] 2025-09-17 00:44:10.795760 | orchestrator | 2025-09-17 00:44:10.795772 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-17 00:44:10.795783 | orchestrator | Wednesday 17 September 2025 00:42:58 +0000 (0:00:45.058) 0:00:47.785 *** 2025-09-17 00:44:10.795794 | orchestrator | changed: [testbed-manager] 2025-09-17 00:44:10.795806 | orchestrator | 2025-09-17 00:44:10.795817 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:44:10.795829 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:44:10.795842 | orchestrator | 2025-09-17 00:44:10.795853 | orchestrator | 2025-09-17 00:44:10.796196 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:44:10.796224 | orchestrator | Wednesday 17 September 2025 00:43:03 +0000 (0:00:05.841) 0:00:53.627 *** 2025-09-17 00:44:10.796237 | orchestrator | =============================================================================== 2025-09-17 00:44:10.796250 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 45.06s 2025-09-17 00:44:10.796262 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 5.84s 2025-09-17 00:44:10.796274 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.22s 2025-09-17 00:44:10.796287 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.76s 2025-09-17 00:44:10.796299 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.49s 2025-09-17 00:44:10.796311 | orchestrator | 2025-09-17 00:44:10.797182 | orchestrator | 2025-09-17 00:44:10.797327 | orchestrator | PLAY [Apply role common] 
******************************************************* 2025-09-17 00:44:10.798265 | orchestrator | 2025-09-17 00:44:10.798322 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-17 00:44:10.798337 | orchestrator | Wednesday 17 September 2025 00:41:45 +0000 (0:00:00.299) 0:00:00.299 *** 2025-09-17 00:44:10.798348 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:44:10.798361 | orchestrator | 2025-09-17 00:44:10.798373 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-17 00:44:10.798403 | orchestrator | Wednesday 17 September 2025 00:41:46 +0000 (0:00:01.078) 0:00:01.378 *** 2025-09-17 00:44:10.798415 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 00:44:10.798426 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 00:44:10.798437 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 00:44:10.798448 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 00:44:10.798459 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 00:44:10.798470 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 00:44:10.798480 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 00:44:10.798491 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 00:44:10.798501 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 00:44:10.798512 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 
'cron'}, 'cron']) 2025-09-17 00:44:10.798523 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 00:44:10.798535 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 00:44:10.798545 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 00:44:10.798556 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 00:44:10.798566 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 00:44:10.798578 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 00:44:10.798589 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 00:44:10.798600 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 00:44:10.798610 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 00:44:10.798621 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 00:44:10.798632 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 00:44:10.798642 | orchestrator | 2025-09-17 00:44:10.798653 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-17 00:44:10.798669 | orchestrator | Wednesday 17 September 2025 00:41:50 +0000 (0:00:03.971) 0:00:05.350 *** 2025-09-17 00:44:10.798680 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:44:10.798693 | orchestrator | 2025-09-17 00:44:10.798704 | 
orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-17 00:44:10.798714 | orchestrator | Wednesday 17 September 2025 00:41:51 +0000 (0:00:01.191) 0:00:06.542 *** 2025-09-17 00:44:10.798730 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.798747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.798813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.798828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.798839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.798850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.798862 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.798879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.798920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.798969 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.798982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.798994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799120 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799142 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799153 | orchestrator |
2025-09-17 00:44:10.799164 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-09-17 00:44:10.799175 | orchestrator | Wednesday 17 September 2025 00:41:56 +0000 (0:00:04.495) 0:00:11.037 ***
2025-09-17 00:44:10.799187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799233 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:44:10.799244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799307 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:44:10.799318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799357 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799375 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799386 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799397 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:44:10.799422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799502 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:44:10.799513 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:44:10.799524 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:44:10.799535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799578 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:44:10.799589 | orchestrator |
2025-09-17 00:44:10.799600 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-09-17 00:44:10.799611 | orchestrator | Wednesday 17 September 2025 00:41:58 +0000 (0:00:01.743) 0:00:12.780 ***
2025-09-17 00:44:10.799622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799668 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799680 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799691 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799732 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:44:10.799744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799766 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:44:10.799777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799821 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:44:10.799832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.799944 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:44:10.799955 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:44:10.799966 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:44:10.799982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.799994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.800005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.800016 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:44:10.800027 | orchestrator |
2025-09-17 00:44:10.800038 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-09-17 00:44:10.800049 | orchestrator | Wednesday 17 September 2025 00:41:59 +0000 (0:00:01.726) 0:00:14.506 ***
2025-09-17 00:44:10.800060 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:44:10.800070 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:44:10.800081 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:44:10.800092 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:44:10.800103 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:44:10.800120 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:44:10.800132 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:44:10.800142 | orchestrator |
2025-09-17 00:44:10.800153 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-09-17 00:44:10.800164 | orchestrator | Wednesday 17 September 2025 00:42:00 +0000 (0:00:00.600) 0:00:15.107 ***
2025-09-17 00:44:10.800175 | orchestrator | skipping: [testbed-manager]
2025-09-17 00:44:10.800185 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:44:10.800196 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:44:10.800207 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:44:10.800217 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:44:10.800228 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:44:10.800238 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:44:10.800249 | orchestrator |
2025-09-17 00:44:10.800260 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-09-17 00:44:10.800270 | orchestrator | Wednesday 17 September 2025 00:42:01 +0000 (0:00:01.076) 0:00:16.183 ***
2025-09-17 00:44:10.800282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.800301 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.800313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.800329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.800341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.800352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.800369 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.800381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.800399 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 00:44:10.800410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.800421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.800437 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.800449 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.800460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.800479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.800496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:44:10.800508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron',
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.800519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.800531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.800547 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.800558 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.800569 | orchestrator | 2025-09-17 00:44:10.800580 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-17 00:44:10.800591 | orchestrator | Wednesday 17 September 2025 00:42:07 +0000 (0:00:06.076) 0:00:22.259 *** 2025-09-17 00:44:10.800602 | orchestrator | [WARNING]: Skipped 2025-09-17 00:44:10.800613 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-17 00:44:10.800624 | orchestrator | to this access issue: 2025-09-17 00:44:10.800635 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-17 00:44:10.800645 | orchestrator | directory 2025-09-17 00:44:10.800656 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-17 00:44:10.800673 | orchestrator | 2025-09-17 00:44:10.800685 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-17 00:44:10.800702 | orchestrator | Wednesday 17 September 2025 00:42:09 +0000 (0:00:01.788) 0:00:24.048 *** 2025-09-17 00:44:10.800712 | orchestrator | [WARNING]: Skipped 2025-09-17 00:44:10.800723 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-17 00:44:10.800740 | orchestrator | to this access issue: 2025-09-17 00:44:10.800751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-17 00:44:10.800762 | orchestrator | directory 2025-09-17 00:44:10.800773 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-17 00:44:10.800784 | orchestrator | 2025-09-17 00:44:10.800794 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-17 00:44:10.800805 | orchestrator | Wednesday 17 September 
2025 00:42:10 +0000 (0:00:01.145) 0:00:25.194 *** 2025-09-17 00:44:10.800816 | orchestrator | [WARNING]: Skipped 2025-09-17 00:44:10.800827 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-17 00:44:10.800837 | orchestrator | to this access issue: 2025-09-17 00:44:10.800848 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-17 00:44:10.800859 | orchestrator | directory 2025-09-17 00:44:10.800869 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-17 00:44:10.800880 | orchestrator | 2025-09-17 00:44:10.800919 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-17 00:44:10.800931 | orchestrator | Wednesday 17 September 2025 00:42:11 +0000 (0:00:00.862) 0:00:26.056 *** 2025-09-17 00:44:10.800941 | orchestrator | [WARNING]: Skipped 2025-09-17 00:44:10.800952 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-17 00:44:10.800963 | orchestrator | to this access issue: 2025-09-17 00:44:10.800974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-17 00:44:10.800984 | orchestrator | directory 2025-09-17 00:44:10.800995 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-17 00:44:10.801006 | orchestrator | 2025-09-17 00:44:10.801017 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-17 00:44:10.801027 | orchestrator | Wednesday 17 September 2025 00:42:12 +0000 (0:00:00.942) 0:00:26.999 *** 2025-09-17 00:44:10.801038 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:44:10.801048 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:44:10.801059 | orchestrator | changed: [testbed-manager] 2025-09-17 00:44:10.801070 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:44:10.801080 | orchestrator | changed: [testbed-node-0] 2025-09-17 
00:44:10.801091 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:44:10.801101 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:44:10.801112 | orchestrator | 2025-09-17 00:44:10.801123 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-17 00:44:10.801133 | orchestrator | Wednesday 17 September 2025 00:42:17 +0000 (0:00:04.680) 0:00:31.680 *** 2025-09-17 00:44:10.801144 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-17 00:44:10.801155 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-17 00:44:10.801165 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-17 00:44:10.801176 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-17 00:44:10.801187 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-17 00:44:10.801198 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-17 00:44:10.801208 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-17 00:44:10.801219 | orchestrator | 2025-09-17 00:44:10.801230 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-17 00:44:10.801251 | orchestrator | Wednesday 17 September 2025 00:42:19 +0000 (0:00:02.561) 0:00:34.242 *** 2025-09-17 00:44:10.801262 | orchestrator | changed: [testbed-manager] 2025-09-17 00:44:10.801273 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:44:10.801284 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:44:10.801295 | orchestrator | changed: [testbed-node-2] 2025-09-17 
00:44:10.801305 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:44:10.801316 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:44:10.801326 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:44:10.801337 | orchestrator | 2025-09-17 00:44:10.801347 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-17 00:44:10.801358 | orchestrator | Wednesday 17 September 2025 00:42:22 +0000 (0:00:02.444) 0:00:36.686 *** 2025-09-17 00:44:10.801369 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.801387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:44:10.801399 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.801411 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.801422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:44:10.801433 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.801454 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.801470 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.801481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:44:10.801499 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:44:10.801510 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.801522 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.801533 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.801545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:44:10.801562 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.801574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:44:10.801585 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.801608 | orchestrator | 
ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.801626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:44:10.801637 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.801649 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.801665 | orchestrator | 2025-09-17 00:44:10.801677 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-17 00:44:10.801687 | orchestrator | Wednesday 17 September 2025 00:42:25 +0000 (0:00:03.416) 0:00:40.102 *** 2025-09-17 00:44:10.801698 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-17 00:44:10.801709 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-17 00:44:10.801720 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-17 00:44:10.801730 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-17 00:44:10.801741 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-17 00:44:10.801752 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-17 00:44:10.801762 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-17 00:44:10.801773 | orchestrator | 2025-09-17 00:44:10.801783 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-17 00:44:10.801799 | orchestrator | Wednesday 17 September 2025 00:42:28 +0000 (0:00:02.738) 0:00:42.840 *** 2025-09-17 00:44:10.801810 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-17 00:44:10.801821 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-17 00:44:10.801831 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-17 00:44:10.801842 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-17 00:44:10.801853 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-17 00:44:10.801863 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-17 00:44:10.801874 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-17 00:44:10.801884 | orchestrator | 2025-09-17 00:44:10.801943 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-17 00:44:10.801955 | orchestrator | Wednesday 17 September 2025 00:42:31 +0000 (0:00:03.350) 0:00:46.191 *** 2025-09-17 00:44:10.801966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.801987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.801998 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.802060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.802074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802090 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802100 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.802110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.802127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 
00:44:10.802137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 00:44:10.802164 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802188 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802224 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802250 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802271 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:44:10.802280 | orchestrator | 2025-09-17 00:44:10.802290 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-17 00:44:10.802300 | orchestrator | Wednesday 17 September 2025 00:42:35 +0000 (0:00:04.031) 0:00:50.223 *** 2025-09-17 00:44:10.802309 | orchestrator | changed: [testbed-manager] 2025-09-17 00:44:10.802319 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:44:10.802335 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:44:10.802345 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:44:10.802354 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:44:10.802364 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:44:10.802373 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:44:10.802382 | orchestrator | 2025-09-17 00:44:10.802392 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-17 00:44:10.802401 | orchestrator | Wednesday 17 September 2025 00:42:38 +0000 (0:00:02.671) 0:00:52.894 *** 2025-09-17 00:44:10.802415 | orchestrator | changed: [testbed-manager] 2025-09-17 00:44:10.802424 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:44:10.802434 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:44:10.802443 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:44:10.802452 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:44:10.802462 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:44:10.802471 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:44:10.802481 | orchestrator | 2025-09-17 00:44:10.802490 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2025-09-17 00:44:10.802500 | orchestrator | Wednesday 17 September 2025 00:42:39 +0000 (0:00:01.123) 0:00:54.018 *** 2025-09-17 00:44:10.802509 | orchestrator | 2025-09-17 00:44:10.802519 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-17 00:44:10.802528 | orchestrator | Wednesday 17 September 2025 00:42:39 +0000 (0:00:00.066) 0:00:54.085 *** 2025-09-17 00:44:10.802538 | orchestrator | 2025-09-17 00:44:10.802547 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-17 00:44:10.802557 | orchestrator | Wednesday 17 September 2025 00:42:39 +0000 (0:00:00.063) 0:00:54.148 *** 2025-09-17 00:44:10.802566 | orchestrator | 2025-09-17 00:44:10.802575 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-17 00:44:10.802585 | orchestrator | Wednesday 17 September 2025 00:42:39 +0000 (0:00:00.068) 0:00:54.216 *** 2025-09-17 00:44:10.802601 | orchestrator | 2025-09-17 00:44:10.802610 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-17 00:44:10.802620 | orchestrator | Wednesday 17 September 2025 00:42:39 +0000 (0:00:00.179) 0:00:54.396 *** 2025-09-17 00:44:10.802629 | orchestrator | 2025-09-17 00:44:10.802639 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-17 00:44:10.802648 | orchestrator | Wednesday 17 September 2025 00:42:39 +0000 (0:00:00.063) 0:00:54.460 *** 2025-09-17 00:44:10.802658 | orchestrator | 2025-09-17 00:44:10.802667 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-17 00:44:10.802676 | orchestrator | Wednesday 17 September 2025 00:42:39 +0000 (0:00:00.073) 0:00:54.534 *** 2025-09-17 00:44:10.802686 | orchestrator | 2025-09-17 00:44:10.802695 | orchestrator | RUNNING HANDLER 
[common : Restart fluentd container] *************************** 2025-09-17 00:44:10.802710 | orchestrator | Wednesday 17 September 2025 00:42:39 +0000 (0:00:00.081) 0:00:54.616 *** 2025-09-17 00:44:10.802720 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:44:10.802729 | orchestrator | changed: [testbed-manager] 2025-09-17 00:44:10.802739 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:44:10.802748 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:44:10.802757 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:44:10.802767 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:44:10.802776 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:44:10.802786 | orchestrator | 2025-09-17 00:44:10.802795 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-17 00:44:10.802805 | orchestrator | Wednesday 17 September 2025 00:43:15 +0000 (0:00:35.879) 0:01:30.495 *** 2025-09-17 00:44:10.802814 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:44:10.802823 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:44:10.802833 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:44:10.802842 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:44:10.802851 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:44:10.802861 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:44:10.802870 | orchestrator | changed: [testbed-manager] 2025-09-17 00:44:10.802879 | orchestrator | 2025-09-17 00:44:10.802904 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-17 00:44:10.802915 | orchestrator | Wednesday 17 September 2025 00:43:59 +0000 (0:00:43.656) 0:02:14.152 *** 2025-09-17 00:44:10.802924 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:44:10.802933 | orchestrator | ok: [testbed-manager] 2025-09-17 00:44:10.802943 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:44:10.802952 | orchestrator | ok: [testbed-node-2] 
2025-09-17 00:44:10.802962 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:44:10.802971 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:44:10.802980 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:44:10.802989 | orchestrator | 2025-09-17 00:44:10.802999 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-17 00:44:10.803008 | orchestrator | Wednesday 17 September 2025 00:44:02 +0000 (0:00:02.695) 0:02:16.847 *** 2025-09-17 00:44:10.803018 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:44:10.803027 | orchestrator | changed: [testbed-manager] 2025-09-17 00:44:10.803036 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:44:10.803046 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:44:10.803055 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:44:10.803064 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:44:10.803073 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:44:10.803083 | orchestrator | 2025-09-17 00:44:10.803092 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:44:10.803103 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-17 00:44:10.803112 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-17 00:44:10.803128 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-17 00:44:10.803138 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-17 00:44:10.803147 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-17 00:44:10.803157 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-17 00:44:10.803170 | orchestrator | 
testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-17 00:44:10.803180 | orchestrator | 2025-09-17 00:44:10.803189 | orchestrator | 2025-09-17 00:44:10.803199 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:44:10.803208 | orchestrator | Wednesday 17 September 2025 00:44:07 +0000 (0:00:05.367) 0:02:22.215 *** 2025-09-17 00:44:10.803218 | orchestrator | =============================================================================== 2025-09-17 00:44:10.803227 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 43.66s 2025-09-17 00:44:10.803237 | orchestrator | common : Restart fluentd container ------------------------------------- 35.88s 2025-09-17 00:44:10.803246 | orchestrator | common : Copying over config.json files for services -------------------- 6.08s 2025-09-17 00:44:10.803255 | orchestrator | common : Restart cron container ----------------------------------------- 5.37s 2025-09-17 00:44:10.803265 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.68s 2025-09-17 00:44:10.803274 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.50s 2025-09-17 00:44:10.803283 | orchestrator | common : Check common containers ---------------------------------------- 4.03s 2025-09-17 00:44:10.803293 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.97s 2025-09-17 00:44:10.803302 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.42s 2025-09-17 00:44:10.803312 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.35s 2025-09-17 00:44:10.803321 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.74s 2025-09-17 00:44:10.803330 | orchestrator | common : Initializing toolbox container using 
normal user --------------- 2.70s 2025-09-17 00:44:10.803339 | orchestrator | common : Creating log volume -------------------------------------------- 2.67s 2025-09-17 00:44:10.803349 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.56s 2025-09-17 00:44:10.803363 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.44s 2025-09-17 00:44:10.803373 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.79s 2025-09-17 00:44:10.803383 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.74s 2025-09-17 00:44:10.803392 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.73s 2025-09-17 00:44:10.803402 | orchestrator | common : include_tasks -------------------------------------------------- 1.19s 2025-09-17 00:44:10.803411 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.15s 2025-09-17 00:44:10.803420 | orchestrator | 2025-09-17 00:44:10 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:10.803430 | orchestrator | 2025-09-17 00:44:10 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED 2025-09-17 00:44:10.803440 | orchestrator | 2025-09-17 00:44:10 | INFO  | Task 604786f5-aac3-4f7d-a0d8-df417df3359b is in state STARTED 2025-09-17 00:44:10.803449 | orchestrator | 2025-09-17 00:44:10 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:44:10.803467 | orchestrator | 2025-09-17 00:44:10 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:13.839658 | orchestrator | 2025-09-17 00:44:13 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:13.839954 | orchestrator | 2025-09-17 00:44:13 | INFO  | Task da2927f4-6dd6-4150-bdd5-6f3f2bed7433 is in state STARTED 2025-09-17 00:44:13.840503 | 
orchestrator | 2025-09-17 00:44:13 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:13.841172 | orchestrator | 2025-09-17 00:44:13 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED 2025-09-17 00:44:13.841768 | orchestrator | 2025-09-17 00:44:13 | INFO  | Task 604786f5-aac3-4f7d-a0d8-df417df3359b is in state STARTED 2025-09-17 00:44:13.842356 | orchestrator | 2025-09-17 00:44:13 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:44:13.842736 | orchestrator | 2025-09-17 00:44:13 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:16.887603 | orchestrator | 2025-09-17 00:44:16 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:16.888004 | orchestrator | 2025-09-17 00:44:16 | INFO  | Task da2927f4-6dd6-4150-bdd5-6f3f2bed7433 is in state STARTED 2025-09-17 00:44:16.888630 | orchestrator | 2025-09-17 00:44:16 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:16.889700 | orchestrator | 2025-09-17 00:44:16 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED 2025-09-17 00:44:16.891298 | orchestrator | 2025-09-17 00:44:16 | INFO  | Task 604786f5-aac3-4f7d-a0d8-df417df3359b is in state STARTED 2025-09-17 00:44:16.894136 | orchestrator | 2025-09-17 00:44:16 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:44:16.894378 | orchestrator | 2025-09-17 00:44:16 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:19.957291 | orchestrator | 2025-09-17 00:44:19 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:19.957399 | orchestrator | 2025-09-17 00:44:19 | INFO  | Task da2927f4-6dd6-4150-bdd5-6f3f2bed7433 is in state STARTED 2025-09-17 00:44:19.957413 | orchestrator | 2025-09-17 00:44:19 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:19.957425 | 
orchestrator | 2025-09-17 00:44:19 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED 2025-09-17 00:44:19.957436 | orchestrator | 2025-09-17 00:44:19 | INFO  | Task 604786f5-aac3-4f7d-a0d8-df417df3359b is in state STARTED 2025-09-17 00:44:19.957447 | orchestrator | 2025-09-17 00:44:19 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:44:19.957458 | orchestrator | 2025-09-17 00:44:19 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:22.970718 | orchestrator | 2025-09-17 00:44:22 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:22.972997 | orchestrator | 2025-09-17 00:44:22 | INFO  | Task da2927f4-6dd6-4150-bdd5-6f3f2bed7433 is in state STARTED 2025-09-17 00:44:22.973439 | orchestrator | 2025-09-17 00:44:22 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:22.973950 | orchestrator | 2025-09-17 00:44:22 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED 2025-09-17 00:44:22.974376 | orchestrator | 2025-09-17 00:44:22 | INFO  | Task 604786f5-aac3-4f7d-a0d8-df417df3359b is in state STARTED 2025-09-17 00:44:22.974821 | orchestrator | 2025-09-17 00:44:22 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:44:22.974957 | orchestrator | 2025-09-17 00:44:22 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:26.014227 | orchestrator | 2025-09-17 00:44:26 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:26.016290 | orchestrator | 2025-09-17 00:44:26 | INFO  | Task da2927f4-6dd6-4150-bdd5-6f3f2bed7433 is in state STARTED 2025-09-17 00:44:26.016328 | orchestrator | 2025-09-17 00:44:26 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:26.016341 | orchestrator | 2025-09-17 00:44:26 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED 2025-09-17 00:44:26.023821 | 
orchestrator | 2025-09-17 00:44:26 | INFO  | Task 604786f5-aac3-4f7d-a0d8-df417df3359b is in state STARTED 2025-09-17 00:44:26.025067 | orchestrator | 2025-09-17 00:44:26 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:44:26.025141 | orchestrator | 2025-09-17 00:44:26 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:29.052628 | orchestrator | 2025-09-17 00:44:29 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:29.053362 | orchestrator | 2025-09-17 00:44:29 | INFO  | Task da2927f4-6dd6-4150-bdd5-6f3f2bed7433 is in state STARTED 2025-09-17 00:44:29.054654 | orchestrator | 2025-09-17 00:44:29 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:29.057028 | orchestrator | 2025-09-17 00:44:29 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED 2025-09-17 00:44:29.058130 | orchestrator | 2025-09-17 00:44:29 | INFO  | Task 604786f5-aac3-4f7d-a0d8-df417df3359b is in state SUCCESS 2025-09-17 00:44:29.059226 | orchestrator | 2025-09-17 00:44:29 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:44:29.060131 | orchestrator | 2025-09-17 00:44:29 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:44:29.060443 | orchestrator | 2025-09-17 00:44:29 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:32.094276 | orchestrator | 2025-09-17 00:44:32 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:32.094395 | orchestrator | 2025-09-17 00:44:32 | INFO  | Task da2927f4-6dd6-4150-bdd5-6f3f2bed7433 is in state STARTED 2025-09-17 00:44:32.098251 | orchestrator | 2025-09-17 00:44:32 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:32.099370 | orchestrator | 2025-09-17 00:44:32 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED 2025-09-17 00:44:32.099399 | 
orchestrator | 2025-09-17 00:44:32 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:44:32.101214 | orchestrator | 2025-09-17 00:44:32 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:44:32.101241 | orchestrator | 2025-09-17 00:44:32 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:35.142691 | orchestrator | 2025-09-17 00:44:35 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:35.144109 | orchestrator | 2025-09-17 00:44:35 | INFO  | Task da2927f4-6dd6-4150-bdd5-6f3f2bed7433 is in state STARTED 2025-09-17 00:44:35.145565 | orchestrator | 2025-09-17 00:44:35 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:35.146965 | orchestrator | 2025-09-17 00:44:35 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED 2025-09-17 00:44:35.148386 | orchestrator | 2025-09-17 00:44:35 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:44:35.149919 | orchestrator | 2025-09-17 00:44:35 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:44:35.149943 | orchestrator | 2025-09-17 00:44:35 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:38.197410 | orchestrator | 2025-09-17 00:44:38 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:44:38.200267 | orchestrator | 2025-09-17 00:44:38 | INFO  | Task da2927f4-6dd6-4150-bdd5-6f3f2bed7433 is in state STARTED 2025-09-17 00:44:38.202729 | orchestrator | 2025-09-17 00:44:38 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:44:38.203367 | orchestrator | 2025-09-17 00:44:38 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED 2025-09-17 00:44:38.206424 | orchestrator | 2025-09-17 00:44:38 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:44:38.207130 | 
orchestrator | 2025-09-17 00:44:38 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:44:38.207159 | orchestrator | 2025-09-17 00:44:38 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:44:41.279364 | orchestrator | 2025-09-17 00:44:41.279464 | orchestrator | 2025-09-17 00:44:41.279481 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 00:44:41.279494 | orchestrator | 2025-09-17 00:44:41.279505 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 00:44:41.279517 | orchestrator | Wednesday 17 September 2025 00:44:15 +0000 (0:00:00.289) 0:00:00.289 *** 2025-09-17 00:44:41.279528 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:44:41.279539 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:44:41.279550 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:44:41.279560 | orchestrator | 2025-09-17 00:44:41.279571 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 00:44:41.279582 | orchestrator | Wednesday 17 September 2025 00:44:15 +0000 (0:00:00.324) 0:00:00.613 *** 2025-09-17 00:44:41.279593 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-17 00:44:41.279605 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-17 00:44:41.279615 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-17 00:44:41.279626 | orchestrator | 2025-09-17 00:44:41.279636 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-17 00:44:41.279647 | orchestrator | 2025-09-17 00:44:41.279657 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-17 00:44:41.279668 | orchestrator | Wednesday 17 September 2025 00:44:15 +0000 (0:00:00.345) 0:00:00.958 *** 2025-09-17 00:44:41.279679 | orchestrator | included: 
/ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:44:41.279690 | orchestrator | 2025-09-17 00:44:41.279700 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-17 00:44:41.279711 | orchestrator | Wednesday 17 September 2025 00:44:16 +0000 (0:00:00.522) 0:00:01.481 *** 2025-09-17 00:44:41.279721 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-17 00:44:41.279732 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-17 00:44:41.279742 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-17 00:44:41.279753 | orchestrator | 2025-09-17 00:44:41.279763 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-17 00:44:41.279774 | orchestrator | Wednesday 17 September 2025 00:44:17 +0000 (0:00:00.682) 0:00:02.163 *** 2025-09-17 00:44:41.279784 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-17 00:44:41.279795 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-17 00:44:41.279805 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-17 00:44:41.279816 | orchestrator | 2025-09-17 00:44:41.279826 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-17 00:44:41.279858 | orchestrator | Wednesday 17 September 2025 00:44:18 +0000 (0:00:01.791) 0:00:03.955 *** 2025-09-17 00:44:41.279870 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:44:41.279882 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:44:41.279931 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:44:41.279944 | orchestrator | 2025-09-17 00:44:41.279957 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-17 00:44:41.279969 | orchestrator | Wednesday 17 September 2025 00:44:20 +0000 (0:00:01.706) 0:00:05.661 *** 2025-09-17 
00:44:41.279981 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:44:41.279994 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:44:41.280014 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:44:41.280027 | orchestrator | 2025-09-17 00:44:41.280039 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:44:41.280051 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:44:41.280065 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:44:41.280078 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:44:41.280090 | orchestrator | 2025-09-17 00:44:41.280101 | orchestrator | 2025-09-17 00:44:41.280113 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:44:41.280126 | orchestrator | Wednesday 17 September 2025 00:44:27 +0000 (0:00:07.235) 0:00:12.897 *** 2025-09-17 00:44:41.280138 | orchestrator | =============================================================================== 2025-09-17 00:44:41.280150 | orchestrator | memcached : Restart memcached container --------------------------------- 7.24s 2025-09-17 00:44:41.280162 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.79s 2025-09-17 00:44:41.280175 | orchestrator | memcached : Check memcached container ----------------------------------- 1.71s 2025-09-17 00:44:41.280187 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.68s 2025-09-17 00:44:41.280198 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.52s 2025-09-17 00:44:41.280210 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2025-09-17 00:44:41.280223 | 
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-09-17 00:44:41.280235 | orchestrator | 2025-09-17 00:44:41.280246 | orchestrator | 2025-09-17 00:44:41.280256 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 00:44:41.280267 | orchestrator | 2025-09-17 00:44:41.280278 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 00:44:41.280288 | orchestrator | Wednesday 17 September 2025 00:44:14 +0000 (0:00:00.300) 0:00:00.300 *** 2025-09-17 00:44:41.280299 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:44:41.280310 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:44:41.280320 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:44:41.280331 | orchestrator | 2025-09-17 00:44:41.280342 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 00:44:41.280369 | orchestrator | Wednesday 17 September 2025 00:44:14 +0000 (0:00:00.431) 0:00:00.731 *** 2025-09-17 00:44:41.280381 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-17 00:44:41.280392 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-17 00:44:41.280402 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-17 00:44:41.280413 | orchestrator | 2025-09-17 00:44:41.280423 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-17 00:44:41.280434 | orchestrator | 2025-09-17 00:44:41.280445 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-17 00:44:41.280455 | orchestrator | Wednesday 17 September 2025 00:44:15 +0000 (0:00:00.602) 0:00:01.334 *** 2025-09-17 00:44:41.280473 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:44:41.280484 | orchestrator | 
2025-09-17 00:44:41.280495 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-09-17 00:44:41.280505 | orchestrator | Wednesday 17 September 2025 00:44:15 +0000 (0:00:00.693) 0:00:02.027 ***
2025-09-17 00:44:41.280520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280617 | orchestrator |
2025-09-17 00:44:41.280628 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-09-17 00:44:41.280640 | orchestrator | Wednesday 17 September 2025 00:44:17 +0000 (0:00:01.315) 0:00:03.343 ***
2025-09-17 00:44:41.280651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280738 | orchestrator |
2025-09-17 00:44:41.280749 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-09-17 00:44:41.280760 | orchestrator | Wednesday 17 September 2025 00:44:19 +0000 (0:00:02.599) 0:00:05.942 ***
2025-09-17 00:44:41.280771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280849 | orchestrator |
2025-09-17 00:44:41.280865 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-09-17 00:44:41.280876 | orchestrator | Wednesday 17 September 2025 00:44:22 +0000 (0:00:02.390) 0:00:08.332 ***
2025-09-17 00:44:41.280906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-17 00:44:41.280986 | orchestrator |
2025-09-17 00:44:41.280997 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-17 00:44:41.281008 | orchestrator | Wednesday 17 September 2025 00:44:23 +0000 (0:00:01.586) 0:00:09.918 ***
2025-09-17 00:44:41.281019 | orchestrator |
2025-09-17 00:44:41.281029 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-17 00:44:41.281046 | orchestrator | Wednesday 17 September 2025 00:44:23 +0000 (0:00:00.080) 0:00:09.999 ***
2025-09-17 00:44:41.281057 | orchestrator |
2025-09-17 00:44:41.281068 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-17 00:44:41.281079 | orchestrator | Wednesday 17 September 2025 00:44:23 +0000 (0:00:00.067) 0:00:10.066 ***
2025-09-17 00:44:41.281089 | orchestrator |
2025-09-17 00:44:41.281100 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-09-17 00:44:41.281111 | orchestrator | Wednesday 17 September 2025 00:44:23 +0000 (0:00:00.065) 0:00:10.132 ***
2025-09-17 00:44:41.281121 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:44:41.281132 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:44:41.281143 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:44:41.281153 | orchestrator |
2025-09-17 00:44:41.281164 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-17 00:44:41.281175 | orchestrator | Wednesday 17 September 2025 00:44:31 +0000 (0:00:07.835) 0:00:17.967 ***
2025-09-17 00:44:41.281186 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:44:41.281196 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:44:41.281207 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:44:41.281217 | orchestrator |
2025-09-17 00:44:41.281228 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:44:41.281239 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:44:41.281250 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:44:41.281261 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:44:41.281271 | orchestrator |
2025-09-17 00:44:41.281282 | orchestrator |
2025-09-17 00:44:41.281293 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:44:41.281303 | orchestrator | Wednesday 17 September 2025 00:44:39 +0000 (0:00:07.898) 0:00:25.866 ***
2025-09-17 00:44:41.281314 | orchestrator | ===============================================================================
2025-09-17 00:44:41.281325 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.90s
2025-09-17 00:44:41.281336 | orchestrator | redis : Restart redis container ----------------------------------------- 7.84s
2025-09-17 00:44:41.281346 | orchestrator | redis : Copying over default config.json files -------------------------- 2.60s
2025-09-17 00:44:41.281357 | orchestrator | redis : Copying over redis config files --------------------------------- 2.39s
2025-09-17 00:44:41.281368 | orchestrator | redis : Check redis containers ------------------------------------------ 1.59s
2025-09-17 00:44:41.281379 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.32s
2025-09-17 00:44:41.281389 | orchestrator | redis : include_tasks --------------------------------------------------- 0.69s
2025-09-17 00:44:41.281400 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2025-09-17 00:44:41.281415 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s
2025-09-17 00:44:41.281426 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s
2025-09-17 00:44:41.281442 | orchestrator | 2025-09-17 00:44:41 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:44:41.281453 | orchestrator | 2025-09-17 00:44:41 | INFO  | Task da2927f4-6dd6-4150-bdd5-6f3f2bed7433 is in state SUCCESS
2025-09-17 00:44:41.281464 | orchestrator | 2025-09-17 00:44:41 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:44:41.281475 | orchestrator | 2025-09-17 00:44:41 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED
2025-09-17 00:44:41.281486 | orchestrator | 2025-09-17 00:44:41 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED
2025-09-17 00:44:41.281497 | orchestrator | 2025-09-17 00:44:41 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:44:41.281585 | orchestrator | 2025-09-17 00:44:41 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:44:44.312323 | orchestrator | 2025-09-17 00:44:44 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:44:44.312946 | orchestrator | 2025-09-17 00:44:44 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:44:44.316212 | orchestrator | 2025-09-17 00:44:44 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED
2025-09-17 00:44:44.316702 | orchestrator | 2025-09-17 00:44:44 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED
2025-09-17 00:44:44.317965 | orchestrator | 2025-09-17 00:44:44 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:44:44.317990 | orchestrator | 2025-09-17 00:44:44 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:44:47.353347 | orchestrator | 2025-09-17 00:44:47 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:44:47.353452 | orchestrator | 2025-09-17 00:44:47 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:44:47.353467 | orchestrator | 2025-09-17 00:44:47 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED
2025-09-17 00:44:47.353479 | orchestrator | 2025-09-17 00:44:47 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED
2025-09-17 00:44:47.353491 | orchestrator | 2025-09-17 00:44:47 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:44:47.353502 | orchestrator | 2025-09-17 00:44:47 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:44:50.382125 | orchestrator | 2025-09-17 00:44:50 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:44:50.382237 | orchestrator | 2025-09-17 00:44:50 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:44:50.382924 | orchestrator | 2025-09-17 00:44:50 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED
2025-09-17 00:44:50.384502 | orchestrator | 2025-09-17 00:44:50 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED
2025-09-17 00:44:50.388953 | orchestrator | 2025-09-17 00:44:50 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:44:50.388995 | orchestrator | 2025-09-17 00:44:50 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:44:53.453575 | orchestrator | 2025-09-17 00:44:53 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:44:53.453674 | orchestrator | 2025-09-17 00:44:53 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:44:53.461142 | orchestrator | 2025-09-17 00:44:53 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED
2025-09-17 00:44:53.461224 | orchestrator | 2025-09-17 00:44:53 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED
2025-09-17 00:44:53.461239 | orchestrator | 2025-09-17 00:44:53 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:44:53.461251 | orchestrator | 2025-09-17 00:44:53 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:44:56.489401 | orchestrator | 2025-09-17 00:44:56 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:44:56.489510 | orchestrator | 2025-09-17 00:44:56 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:44:56.489526 | orchestrator | 2025-09-17 00:44:56 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED
2025-09-17 00:44:56.490219 | orchestrator | 2025-09-17 00:44:56 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED
2025-09-17 00:44:56.490632 | orchestrator | 2025-09-17 00:44:56 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:44:56.490653 | orchestrator | 2025-09-17 00:44:56 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:44:59.525318 | orchestrator | 2025-09-17 00:44:59 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:44:59.525421 | orchestrator | 2025-09-17 00:44:59 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:44:59.525436 | orchestrator | 2025-09-17 00:44:59 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED
2025-09-17 00:44:59.525448 | orchestrator | 2025-09-17 00:44:59 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED
2025-09-17 00:44:59.525459 | orchestrator | 2025-09-17 00:44:59 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:44:59.525471 | orchestrator | 2025-09-17 00:44:59 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:45:02.568472 | orchestrator | 2025-09-17 00:45:02 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:45:02.568765 | orchestrator | 2025-09-17 00:45:02 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:45:02.569520 | orchestrator | 2025-09-17 00:45:02 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED
2025-09-17 00:45:02.570307 | orchestrator | 2025-09-17 00:45:02 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED
2025-09-17 00:45:02.571375 | orchestrator | 2025-09-17 00:45:02 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:45:02.571412 | orchestrator | 2025-09-17 00:45:02 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:45:05.596012 | orchestrator | 2025-09-17 00:45:05 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:45:05.596758 | orchestrator | 2025-09-17 00:45:05 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:45:05.598393 | orchestrator | 2025-09-17 00:45:05 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state STARTED
2025-09-17 00:45:05.600195 | orchestrator | 2025-09-17 00:45:05 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED
2025-09-17 00:45:05.604440 | orchestrator | 2025-09-17 00:45:05 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:45:05.604515 | orchestrator | 2025-09-17 00:45:05 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:45:08.637261 | orchestrator | 2025-09-17 00:45:08 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED
2025-09-17 00:45:08.639586 | orchestrator | 2025-09-17 00:45:08 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:45:08.640663 | orchestrator |
2025-09-17 00:45:08.640701 | orchestrator | 2025-09-17 00:45:08 | INFO  | Task 7204d9e3-9dee-49b2-9ca4-53f61d9b5251 is in state SUCCESS
2025-09-17 00:45:08.642010 | orchestrator |
2025-09-17 00:45:08.642084 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 00:45:08.642096 | orchestrator |
2025-09-17 00:45:08.642107 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 00:45:08.642118 | orchestrator | Wednesday 17 September 2025 00:44:13 +0000 (0:00:00.376) 0:00:00.376 ***
2025-09-17 00:45:08.642129 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:45:08.642140 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:45:08.642150 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:45:08.642161 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:45:08.642171 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:45:08.642182 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:45:08.642192 | orchestrator |
2025-09-17 00:45:08.642203 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 00:45:08.642218 | orchestrator | Wednesday 17 September 2025 00:44:14 +0000 (0:00:00.837) 0:00:01.214 ***
2025-09-17 00:45:08.642230 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 00:45:08.642241 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 00:45:08.642252 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 00:45:08.642262 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 00:45:08.642273 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 00:45:08.642284 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 00:45:08.642294 | orchestrator |
2025-09-17 00:45:08.642305 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-17 00:45:08.642316 | orchestrator |
2025-09-17 00:45:08.642326 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-17 00:45:08.642350 | orchestrator | Wednesday 17 September 2025 00:44:15 +0000 (0:00:00.723) 0:00:01.937 ***
2025-09-17 00:45:08.642362 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:45:08.642373 | orchestrator |
2025-09-17 00:45:08.642384 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-17 00:45:08.642395 | orchestrator | Wednesday 17 September 2025 00:44:16 +0000 (0:00:00.980) 0:00:02.917 ***
2025-09-17 00:45:08.642406 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-17 00:45:08.642416 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-17 00:45:08.642427 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-17 00:45:08.642438 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-17 00:45:08.642448 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-17 00:45:08.642459 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-17 00:45:08.642470 | orchestrator |
2025-09-17 00:45:08.642480 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-17 00:45:08.642491 | orchestrator | Wednesday 17 September 2025 00:44:17 +0000 (0:00:01.211) 0:00:04.129 ***
2025-09-17 00:45:08.642501 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-17 00:45:08.642561 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-17 00:45:08.642574 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-17 00:45:08.642680 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-17 00:45:08.642694 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-17 00:45:08.642722 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-17 00:45:08.642735 | orchestrator |
2025-09-17 00:45:08.642747 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-17 00:45:08.642760 | orchestrator | Wednesday 17 September 2025 00:44:19 +0000 (0:00:01.606) 0:00:05.736 ***
2025-09-17 00:45:08.642772 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-09-17 00:45:08.642785 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:45:08.642798 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-09-17 00:45:08.642809 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:45:08.642820 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-09-17 00:45:08.642831 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:45:08.642841 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-09-17 00:45:08.642852 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:45:08.642863 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-09-17 00:45:08.642873 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:45:08.642884 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-09-17 00:45:08.642921 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:45:08.642932 | orchestrator |
2025-09-17 00:45:08.642943 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-09-17 00:45:08.642954 | orchestrator | Wednesday 17 September 2025 00:44:20 +0000 (0:00:00.861) 0:00:06.990 ***
2025-09-17 00:45:08.642965 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:45:08.642976 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:45:08.642987 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:45:08.642997 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:45:08.643008 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:45:08.643019 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:45:08.643029 | orchestrator |
2025-09-17 00:45:08.643040 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-09-17 00:45:08.643051 | orchestrator | Wednesday 17 September 2025 00:44:21 +0000 (0:00:00.861) 0:00:07.852 ***
2025-09-17 00:45:08.643082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-17 00:45:08.643097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-17 00:45:08.643115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-17 00:45:08.643134 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-17 00:45:08.643146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-17 00:45:08.643158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-17 00:45:08.643177 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch',
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643223 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643264 | orchestrator | 2025-09-17 00:45:08.643275 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-17 00:45:08.643286 | orchestrator | Wednesday 17 September 2025 00:44:22 +0000 (0:00:01.458) 0:00:09.311 *** 2025-09-17 00:45:08.643298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643332 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643344 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643473 | orchestrator | 2025-09-17 00:45:08.643484 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-17 00:45:08.643495 | orchestrator | Wednesday 17 September 2025 00:44:25 +0000 (0:00:02.555) 0:00:11.866 *** 2025-09-17 00:45:08.643506 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:08.643517 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:08.643528 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:08.643539 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:08.643555 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:08.643566 | orchestrator | 
skipping: [testbed-node-5] 2025-09-17 00:45:08.643577 | orchestrator | 2025-09-17 00:45:08.643588 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-17 00:45:08.643598 | orchestrator | Wednesday 17 September 2025 00:44:26 +0000 (0:00:00.934) 0:00:12.800 *** 2025-09-17 00:45:08.643614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643774 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 00:45:08.643785 | orchestrator | 2025-09-17 00:45:08.643796 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-17 00:45:08.643807 | orchestrator | Wednesday 17 September 2025 00:44:28 +0000 (0:00:02.083) 0:00:14.884 *** 2025-09-17 00:45:08.643818 | orchestrator | 2025-09-17 00:45:08.643833 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-17 00:45:08.643844 | orchestrator | Wednesday 17 September 2025 00:44:28 +0000 (0:00:00.251) 0:00:15.136 *** 2025-09-17 00:45:08.643855 | orchestrator | 2025-09-17 00:45:08.643866 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-17 00:45:08.643877 | orchestrator | Wednesday 17 
September 2025 00:44:28 +0000 (0:00:00.166) 0:00:15.303 *** 2025-09-17 00:45:08.643903 | orchestrator | 2025-09-17 00:45:08.643914 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-17 00:45:08.643925 | orchestrator | Wednesday 17 September 2025 00:44:29 +0000 (0:00:00.123) 0:00:15.426 *** 2025-09-17 00:45:08.643936 | orchestrator | 2025-09-17 00:45:08.643946 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-17 00:45:08.643957 | orchestrator | Wednesday 17 September 2025 00:44:29 +0000 (0:00:00.164) 0:00:15.591 *** 2025-09-17 00:45:08.643968 | orchestrator | 2025-09-17 00:45:08.643978 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-17 00:45:08.643989 | orchestrator | Wednesday 17 September 2025 00:44:29 +0000 (0:00:00.159) 0:00:15.751 *** 2025-09-17 00:45:08.644000 | orchestrator | 2025-09-17 00:45:08.644010 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-17 00:45:08.644021 | orchestrator | Wednesday 17 September 2025 00:44:29 +0000 (0:00:00.210) 0:00:15.961 *** 2025-09-17 00:45:08.644032 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:08.644043 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:08.644053 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:45:08.644064 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:08.644075 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:45:08.644085 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:45:08.644096 | orchestrator | 2025-09-17 00:45:08.644107 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-17 00:45:08.644118 | orchestrator | Wednesday 17 September 2025 00:44:40 +0000 (0:00:10.573) 0:00:26.535 *** 2025-09-17 00:45:08.644129 | orchestrator | ok: [testbed-node-1] 2025-09-17 
00:45:08.644139 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:45:08.644150 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:08.644161 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:08.644171 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:45:08.644182 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:45:08.644193 | orchestrator | 2025-09-17 00:45:08.644203 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-17 00:45:08.644214 | orchestrator | Wednesday 17 September 2025 00:44:41 +0000 (0:00:01.364) 0:00:27.900 *** 2025-09-17 00:45:08.644225 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:08.644236 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:45:08.644252 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:08.644263 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:45:08.644274 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:08.644284 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:45:08.644295 | orchestrator | 2025-09-17 00:45:08.644306 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-17 00:45:08.644316 | orchestrator | Wednesday 17 September 2025 00:44:45 +0000 (0:00:03.582) 0:00:31.482 *** 2025-09-17 00:45:08.644327 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-17 00:45:08.644338 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-17 00:45:08.644349 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-17 00:45:08.644360 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-17 00:45:08.644371 | orchestrator | changed: [testbed-node-4] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-17 00:45:08.644387 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-17 00:45:08.644398 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-17 00:45:08.644409 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-17 00:45:08.644420 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-17 00:45:08.644431 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-17 00:45:08.644441 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-17 00:45:08.644452 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-17 00:45:08.644463 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 00:45:08.644474 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 00:45:08.644485 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 00:45:08.644495 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 00:45:08.644510 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 00:45:08.644522 | orchestrator | ok: [testbed-node-0] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 00:45:08.644533 | orchestrator | 2025-09-17 00:45:08.644544 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-17 00:45:08.644554 | orchestrator | Wednesday 17 September 2025 00:44:52 +0000 (0:00:07.380) 0:00:38.862 *** 2025-09-17 00:45:08.644565 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-17 00:45:08.644576 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:08.644587 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-17 00:45:08.644597 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:08.644608 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-17 00:45:08.644619 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:08.644630 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-17 00:45:08.644640 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-17 00:45:08.644657 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-17 00:45:08.644668 | orchestrator | 2025-09-17 00:45:08.644678 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-17 00:45:08.644689 | orchestrator | Wednesday 17 September 2025 00:44:55 +0000 (0:00:02.542) 0:00:41.404 *** 2025-09-17 00:45:08.644700 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-17 00:45:08.644711 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:08.644722 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-17 00:45:08.644733 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:08.644743 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-17 00:45:08.644754 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:08.644765 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 
2025-09-17 00:45:08.644776 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-17 00:45:08.644786 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-17 00:45:08.644797 | orchestrator | 2025-09-17 00:45:08.644808 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-17 00:45:08.644819 | orchestrator | Wednesday 17 September 2025 00:44:58 +0000 (0:00:03.673) 0:00:45.078 *** 2025-09-17 00:45:08.644830 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:08.644840 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:08.644852 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:08.644862 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:45:08.644873 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:45:08.644884 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:45:08.644908 | orchestrator | 2025-09-17 00:45:08.644919 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:45:08.644931 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 00:45:08.644942 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 00:45:08.644953 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 00:45:08.644964 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 00:45:08.644975 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 00:45:08.644992 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 00:45:08.645003 | orchestrator | 2025-09-17 00:45:08.645014 | orchestrator | 2025-09-17 00:45:08.645025 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:45:08.645036 | orchestrator | Wednesday 17 September 2025 00:45:07 +0000 (0:00:08.534) 0:00:53.612 *** 2025-09-17 00:45:08.645047 | orchestrator | =============================================================================== 2025-09-17 00:45:08.645058 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 12.12s 2025-09-17 00:45:08.645068 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.57s 2025-09-17 00:45:08.645079 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.38s 2025-09-17 00:45:08.645090 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.67s 2025-09-17 00:45:08.645101 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.56s 2025-09-17 00:45:08.645111 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.54s 2025-09-17 00:45:08.645128 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.08s 2025-09-17 00:45:08.645139 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.61s 2025-09-17 00:45:08.645150 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.46s 2025-09-17 00:45:08.645161 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.36s 2025-09-17 00:45:08.645172 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.25s 2025-09-17 00:45:08.645183 | orchestrator | module-load : Load modules ---------------------------------------------- 1.21s 2025-09-17 00:45:08.645197 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.08s 2025-09-17 00:45:08.645209 | orchestrator | 
openvswitch : include_tasks --------------------------------------------- 0.98s 2025-09-17 00:45:08.645219 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.93s 2025-09-17 00:45:08.645230 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.86s 2025-09-17 00:45:08.645241 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s 2025-09-17 00:45:08.645252 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s 2025-09-17 00:45:08.645263 | orchestrator | 2025-09-17 00:45:08 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:08.645364 | orchestrator | 2025-09-17 00:45:08 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:08.645378 | orchestrator | 2025-09-17 00:45:08 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:11.672653 | orchestrator | 2025-09-17 00:45:11 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:45:11.673850 | orchestrator | 2025-09-17 00:45:11 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:11.677864 | orchestrator | 2025-09-17 00:45:11 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:11.677917 | orchestrator | 2025-09-17 00:45:11 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:11.677929 | orchestrator | 2025-09-17 00:45:11 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:11.677941 | orchestrator | 2025-09-17 00:45:11 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:14.788076 | orchestrator | 2025-09-17 00:45:14 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:45:14.788664 | orchestrator | 2025-09-17 00:45:14 | INFO  | Task 
abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:14.790114 | orchestrator | 2025-09-17 00:45:14 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:14.790822 | orchestrator | 2025-09-17 00:45:14 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:14.791703 | orchestrator | 2025-09-17 00:45:14 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:14.792003 | orchestrator | 2025-09-17 00:45:14 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:17.853372 | orchestrator | 2025-09-17 00:45:17 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:45:17.853596 | orchestrator | 2025-09-17 00:45:17 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:17.856961 | orchestrator | 2025-09-17 00:45:17 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:17.858655 | orchestrator | 2025-09-17 00:45:17 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:17.859074 | orchestrator | 2025-09-17 00:45:17 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:17.859126 | orchestrator | 2025-09-17 00:45:17 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:20.895105 | orchestrator | 2025-09-17 00:45:20 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:45:20.895217 | orchestrator | 2025-09-17 00:45:20 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:20.897247 | orchestrator | 2025-09-17 00:45:20 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:20.897342 | orchestrator | 2025-09-17 00:45:20 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:20.897356 | orchestrator | 2025-09-17 00:45:20 | INFO  | Task 
101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:20.897368 | orchestrator | 2025-09-17 00:45:20 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:24.048118 | orchestrator | 2025-09-17 00:45:24 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:45:24.048240 | orchestrator | 2025-09-17 00:45:24 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:24.048575 | orchestrator | 2025-09-17 00:45:24 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:24.050194 | orchestrator | 2025-09-17 00:45:24 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:24.050631 | orchestrator | 2025-09-17 00:45:24 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:24.050702 | orchestrator | 2025-09-17 00:45:24 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:27.273549 | orchestrator | 2025-09-17 00:45:27 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state STARTED 2025-09-17 00:45:27.273658 | orchestrator | 2025-09-17 00:45:27 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:27.273673 | orchestrator | 2025-09-17 00:45:27 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:27.273685 | orchestrator | 2025-09-17 00:45:27 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:27.273696 | orchestrator | 2025-09-17 00:45:27 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:27.273707 | orchestrator | 2025-09-17 00:45:27 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:30.503364 | orchestrator | 2025-09-17 00:45:30.503484 | orchestrator | 2025-09-17 00:45:30.503500 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-17 00:45:30.503528 | 
orchestrator | 2025-09-17 00:45:30.503541 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-17 00:45:30.503553 | orchestrator | Wednesday 17 September 2025 00:41:46 +0000 (0:00:00.208) 0:00:00.208 *** 2025-09-17 00:45:30.503565 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:45:30.503577 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:45:30.503589 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:45:30.503600 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.503611 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.503622 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.503633 | orchestrator | 2025-09-17 00:45:30.503644 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-17 00:45:30.503655 | orchestrator | Wednesday 17 September 2025 00:41:46 +0000 (0:00:00.570) 0:00:00.778 *** 2025-09-17 00:45:30.503666 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.503678 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.503689 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.503722 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.503734 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.503744 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.503755 | orchestrator | 2025-09-17 00:45:30.503766 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-17 00:45:30.503777 | orchestrator | Wednesday 17 September 2025 00:41:47 +0000 (0:00:00.530) 0:00:01.309 *** 2025-09-17 00:45:30.503788 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.503798 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.503809 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.503820 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.503830 | orchestrator | skipping: 
[testbed-node-1] 2025-09-17 00:45:30.503841 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.503852 | orchestrator | 2025-09-17 00:45:30.503863 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-17 00:45:30.503874 | orchestrator | Wednesday 17 September 2025 00:41:47 +0000 (0:00:00.608) 0:00:01.917 *** 2025-09-17 00:45:30.503913 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.503927 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:45:30.503940 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:45:30.503952 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.503964 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.503976 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:45:30.503989 | orchestrator | 2025-09-17 00:45:30.504001 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-17 00:45:30.504014 | orchestrator | Wednesday 17 September 2025 00:41:49 +0000 (0:00:01.607) 0:00:03.525 *** 2025-09-17 00:45:30.504026 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:45:30.504038 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:45:30.504051 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:45:30.504063 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.504075 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.504088 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.504101 | orchestrator | 2025-09-17 00:45:30.504113 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-17 00:45:30.504124 | orchestrator | Wednesday 17 September 2025 00:41:50 +0000 (0:00:01.002) 0:00:04.527 *** 2025-09-17 00:45:30.504135 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:45:30.504145 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:45:30.504156 | orchestrator | changed: [testbed-node-5] 
2025-09-17 00:45:30.504167 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.504177 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.504188 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.504199 | orchestrator | 2025-09-17 00:45:30.504209 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-17 00:45:30.504221 | orchestrator | Wednesday 17 September 2025 00:41:52 +0000 (0:00:02.022) 0:00:06.549 *** 2025-09-17 00:45:30.504232 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.504242 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.504253 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.504264 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.504274 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.504285 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.504295 | orchestrator | 2025-09-17 00:45:30.504306 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-17 00:45:30.504317 | orchestrator | Wednesday 17 September 2025 00:41:53 +0000 (0:00:00.752) 0:00:07.302 *** 2025-09-17 00:45:30.504328 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.504338 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.504349 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.504360 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.504370 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.504381 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.504404 | orchestrator | 2025-09-17 00:45:30.504425 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-17 00:45:30.504437 | orchestrator | Wednesday 17 September 2025 00:41:54 +0000 (0:00:01.210) 0:00:08.512 *** 2025-09-17 00:45:30.504448 | orchestrator | skipping: [testbed-node-3] => 
(item=net.bridge.bridge-nf-call-iptables)  2025-09-17 00:45:30.504458 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-17 00:45:30.504469 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.504480 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-17 00:45:30.504491 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-17 00:45:30.504502 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.504512 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-17 00:45:30.504523 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-17 00:45:30.504534 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.504544 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-17 00:45:30.504573 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-17 00:45:30.504585 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-17 00:45:30.504596 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.504607 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-17 00:45:30.504618 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.504628 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-17 00:45:30.504639 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-17 00:45:30.504650 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.504661 | orchestrator | 2025-09-17 00:45:30.504671 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-17 00:45:30.504682 | orchestrator | Wednesday 17 September 
2025 00:41:55 +0000 (0:00:00.524) 0:00:09.037 *** 2025-09-17 00:45:30.504693 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.504703 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.504714 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.504725 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.504735 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.504746 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.504757 | orchestrator | 2025-09-17 00:45:30.504768 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-17 00:45:30.504779 | orchestrator | Wednesday 17 September 2025 00:41:55 +0000 (0:00:00.935) 0:00:09.973 *** 2025-09-17 00:45:30.504790 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:45:30.504801 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:45:30.504812 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:45:30.504822 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.504833 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.504843 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.504854 | orchestrator | 2025-09-17 00:45:30.504865 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-17 00:45:30.504876 | orchestrator | Wednesday 17 September 2025 00:41:57 +0000 (0:00:01.189) 0:00:11.163 *** 2025-09-17 00:45:30.504902 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.504914 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.504924 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.504935 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:45:30.504946 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:45:30.504956 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:45:30.504966 | orchestrator | 2025-09-17 00:45:30.504977 | orchestrator | TASK [k3s_download : 
Download k3s binary arm64] ******************************** 2025-09-17 00:45:30.504995 | orchestrator | Wednesday 17 September 2025 00:42:03 +0000 (0:00:05.862) 0:00:17.025 *** 2025-09-17 00:45:30.505005 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.505016 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.505026 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.505037 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.505048 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.505058 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.505069 | orchestrator | 2025-09-17 00:45:30.505080 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-17 00:45:30.505090 | orchestrator | Wednesday 17 September 2025 00:42:04 +0000 (0:00:01.646) 0:00:18.671 *** 2025-09-17 00:45:30.505101 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.505111 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.505122 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.505132 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.505143 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.505153 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.505164 | orchestrator | 2025-09-17 00:45:30.505175 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-17 00:45:30.505188 | orchestrator | Wednesday 17 September 2025 00:42:07 +0000 (0:00:02.500) 0:00:21.171 *** 2025-09-17 00:45:30.505199 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:45:30.505211 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:45:30.505222 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:45:30.505234 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.505245 | orchestrator | ok: [testbed-node-1] 2025-09-17 
00:45:30.505256 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.505267 | orchestrator | 2025-09-17 00:45:30.505279 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-17 00:45:30.505290 | orchestrator | Wednesday 17 September 2025 00:42:08 +0000 (0:00:01.591) 0:00:22.762 *** 2025-09-17 00:45:30.505302 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-17 00:45:30.505314 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-17 00:45:30.505325 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-17 00:45:30.505341 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-17 00:45:30.505353 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-17 00:45:30.505364 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-17 00:45:30.505375 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-17 00:45:30.505387 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-17 00:45:30.505398 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-17 00:45:30.505410 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-17 00:45:30.505421 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-17 00:45:30.505432 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-17 00:45:30.505444 | orchestrator | 2025-09-17 00:45:30.505455 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-17 00:45:30.505466 | orchestrator | Wednesday 17 September 2025 00:42:10 +0000 (0:00:01.962) 0:00:24.725 *** 2025-09-17 00:45:30.505478 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:45:30.505489 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:45:30.505501 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:45:30.505512 | orchestrator | changed: [testbed-node-0] 
2025-09-17 00:45:30.505523 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.505534 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.505545 | orchestrator | 2025-09-17 00:45:30.505563 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-17 00:45:30.505575 | orchestrator | 2025-09-17 00:45:30.505586 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-17 00:45:30.505596 | orchestrator | Wednesday 17 September 2025 00:42:12 +0000 (0:00:02.020) 0:00:26.746 *** 2025-09-17 00:45:30.505613 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.505624 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.505634 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.505645 | orchestrator | 2025-09-17 00:45:30.505656 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-17 00:45:30.505666 | orchestrator | Wednesday 17 September 2025 00:42:13 +0000 (0:00:01.172) 0:00:27.918 *** 2025-09-17 00:45:30.505677 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.505687 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.505698 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.505709 | orchestrator | 2025-09-17 00:45:30.505719 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-17 00:45:30.505730 | orchestrator | Wednesday 17 September 2025 00:42:16 +0000 (0:00:02.201) 0:00:30.119 *** 2025-09-17 00:45:30.505740 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.505751 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.505761 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.505772 | orchestrator | 2025-09-17 00:45:30.505783 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-17 00:45:30.505793 | orchestrator | Wednesday 17 September 
2025 00:42:16 +0000 (0:00:00.853) 0:00:30.973 *** 2025-09-17 00:45:30.505804 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.505814 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.505825 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.505835 | orchestrator | 2025-09-17 00:45:30.505846 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-17 00:45:30.505857 | orchestrator | Wednesday 17 September 2025 00:42:17 +0000 (0:00:00.964) 0:00:31.937 *** 2025-09-17 00:45:30.505868 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.505879 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.505904 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.505916 | orchestrator | 2025-09-17 00:45:30.505928 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-17 00:45:30.505939 | orchestrator | Wednesday 17 September 2025 00:42:18 +0000 (0:00:00.357) 0:00:32.295 *** 2025-09-17 00:45:30.505950 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.505962 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.505973 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.505984 | orchestrator | 2025-09-17 00:45:30.505996 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-17 00:45:30.506007 | orchestrator | Wednesday 17 September 2025 00:42:19 +0000 (0:00:00.753) 0:00:33.049 *** 2025-09-17 00:45:30.506069 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.506084 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.506094 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.506105 | orchestrator | 2025-09-17 00:45:30.506116 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-17 00:45:30.506127 | orchestrator | Wednesday 17 September 2025 00:42:20 +0000 (0:00:01.270) 
0:00:34.319 *** 2025-09-17 00:45:30.506138 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:45:30.506148 | orchestrator | 2025-09-17 00:45:30.506159 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-17 00:45:30.506170 | orchestrator | Wednesday 17 September 2025 00:42:20 +0000 (0:00:00.654) 0:00:34.973 *** 2025-09-17 00:45:30.506181 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.506191 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.506202 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.506213 | orchestrator | 2025-09-17 00:45:30.506223 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-17 00:45:30.506234 | orchestrator | Wednesday 17 September 2025 00:42:23 +0000 (0:00:02.593) 0:00:37.566 *** 2025-09-17 00:45:30.506245 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.506255 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.506273 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.506283 | orchestrator | 2025-09-17 00:45:30.506294 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-17 00:45:30.506305 | orchestrator | Wednesday 17 September 2025 00:42:24 +0000 (0:00:00.831) 0:00:38.397 *** 2025-09-17 00:45:30.506315 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.506326 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.506337 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.506347 | orchestrator | 2025-09-17 00:45:30.506358 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-17 00:45:30.506368 | orchestrator | Wednesday 17 September 2025 00:42:25 +0000 (0:00:01.084) 0:00:39.482 *** 2025-09-17 00:45:30.506379 | orchestrator | skipping: [testbed-node-1] 
2025-09-17 00:45:30.506395 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.506406 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.506416 | orchestrator | 2025-09-17 00:45:30.506427 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-17 00:45:30.506438 | orchestrator | Wednesday 17 September 2025 00:42:27 +0000 (0:00:02.227) 0:00:41.710 *** 2025-09-17 00:45:30.506449 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.506459 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.506470 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.506480 | orchestrator | 2025-09-17 00:45:30.506491 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-17 00:45:30.506501 | orchestrator | Wednesday 17 September 2025 00:42:28 +0000 (0:00:00.528) 0:00:42.238 *** 2025-09-17 00:45:30.506512 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.506523 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.506533 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.506544 | orchestrator | 2025-09-17 00:45:30.506554 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-17 00:45:30.506565 | orchestrator | Wednesday 17 September 2025 00:42:28 +0000 (0:00:00.439) 0:00:42.677 *** 2025-09-17 00:45:30.506576 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.506587 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.506597 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.506608 | orchestrator | 2025-09-17 00:45:30.506625 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-17 00:45:30.506637 | orchestrator | Wednesday 17 September 2025 00:42:30 +0000 (0:00:02.236) 0:00:44.914 *** 2025-09-17 00:45:30.506648 | orchestrator | FAILED - 
RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-17 00:45:30.506660 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-17 00:45:30.506671 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-17 00:45:30.506682 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-17 00:45:30.506693 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-17 00:45:30.506704 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-17 00:45:30.506715 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-17 00:45:30.506725 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-17 00:45:30.506736 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-17 00:45:30.506753 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-17 00:45:30.506764 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-09-17 00:45:30.506775 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-17 00:45:30.506786 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-17 00:45:30.506797 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-17 00:45:30.506807 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-17 00:45:30.506818 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.506829 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.506840 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.506851 | orchestrator | 2025-09-17 00:45:30.506861 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-17 00:45:30.506872 | orchestrator | Wednesday 17 September 2025 00:43:26 +0000 (0:00:55.721) 0:01:40.635 *** 2025-09-17 00:45:30.506883 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.506923 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.506934 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.506945 | orchestrator | 2025-09-17 00:45:30.506956 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-17 00:45:30.506966 | orchestrator | Wednesday 17 September 2025 00:43:27 +0000 (0:00:00.398) 0:01:41.034 *** 2025-09-17 00:45:30.506977 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.506988 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.506999 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.507009 | orchestrator | 2025-09-17 00:45:30.507020 | orchestrator | TASK 
[k3s_server : Copy K3s service file] ************************************** 2025-09-17 00:45:30.507031 | orchestrator | Wednesday 17 September 2025 00:43:28 +0000 (0:00:01.041) 0:01:42.076 *** 2025-09-17 00:45:30.507041 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.507056 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.507068 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.507078 | orchestrator | 2025-09-17 00:45:30.507089 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-17 00:45:30.507100 | orchestrator | Wednesday 17 September 2025 00:43:29 +0000 (0:00:01.100) 0:01:43.177 *** 2025-09-17 00:45:30.507111 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.507121 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.507132 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.507142 | orchestrator | 2025-09-17 00:45:30.507153 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-17 00:45:30.507164 | orchestrator | Wednesday 17 September 2025 00:43:56 +0000 (0:00:27.665) 0:02:10.843 *** 2025-09-17 00:45:30.507174 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.507185 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.507196 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.507206 | orchestrator | 2025-09-17 00:45:30.507217 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-17 00:45:30.507228 | orchestrator | Wednesday 17 September 2025 00:43:57 +0000 (0:00:00.582) 0:02:11.425 *** 2025-09-17 00:45:30.507239 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.507249 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.507260 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.507270 | orchestrator | 2025-09-17 00:45:30.507287 | orchestrator | TASK [k3s_server : Change file access node-token] 
****************************** 2025-09-17 00:45:30.507299 | orchestrator | Wednesday 17 September 2025 00:43:58 +0000 (0:00:00.605) 0:02:12.031 *** 2025-09-17 00:45:30.507316 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.507327 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.507337 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.507348 | orchestrator | 2025-09-17 00:45:30.507358 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-17 00:45:30.507369 | orchestrator | Wednesday 17 September 2025 00:43:58 +0000 (0:00:00.597) 0:02:12.628 *** 2025-09-17 00:45:30.507380 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.507391 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.507401 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.507412 | orchestrator | 2025-09-17 00:45:30.507422 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-17 00:45:30.507433 | orchestrator | Wednesday 17 September 2025 00:43:59 +0000 (0:00:00.889) 0:02:13.518 *** 2025-09-17 00:45:30.507444 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.507455 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.507465 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.507476 | orchestrator | 2025-09-17 00:45:30.507486 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-17 00:45:30.507497 | orchestrator | Wednesday 17 September 2025 00:43:59 +0000 (0:00:00.324) 0:02:13.842 *** 2025-09-17 00:45:30.507508 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.507519 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.507530 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.507540 | orchestrator | 2025-09-17 00:45:30.507551 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-17 
00:45:30.507562 | orchestrator | Wednesday 17 September 2025 00:44:00 +0000 (0:00:00.784) 0:02:14.628 *** 2025-09-17 00:45:30.507572 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.507583 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.507594 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.507604 | orchestrator | 2025-09-17 00:45:30.507615 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-17 00:45:30.507626 | orchestrator | Wednesday 17 September 2025 00:44:01 +0000 (0:00:00.690) 0:02:15.318 *** 2025-09-17 00:45:30.507636 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.507647 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.507658 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.507668 | orchestrator | 2025-09-17 00:45:30.507679 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-17 00:45:30.507690 | orchestrator | Wednesday 17 September 2025 00:44:02 +0000 (0:00:01.116) 0:02:16.434 *** 2025-09-17 00:45:30.507700 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:45:30.507711 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:45:30.507722 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:45:30.507732 | orchestrator | 2025-09-17 00:45:30.507743 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-17 00:45:30.507754 | orchestrator | Wednesday 17 September 2025 00:44:03 +0000 (0:00:00.829) 0:02:17.264 *** 2025-09-17 00:45:30.507764 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.507775 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.507785 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.507796 | orchestrator | 2025-09-17 00:45:30.507807 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-17 
00:45:30.507817 | orchestrator | Wednesday 17 September 2025 00:44:03 +0000 (0:00:00.334) 0:02:17.599 *** 2025-09-17 00:45:30.507828 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.507839 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.507849 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.507860 | orchestrator | 2025-09-17 00:45:30.507871 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-17 00:45:30.507882 | orchestrator | Wednesday 17 September 2025 00:44:03 +0000 (0:00:00.331) 0:02:17.930 *** 2025-09-17 00:45:30.507911 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.507928 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.507939 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.507950 | orchestrator | 2025-09-17 00:45:30.507960 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-17 00:45:30.507971 | orchestrator | Wednesday 17 September 2025 00:44:04 +0000 (0:00:00.841) 0:02:18.771 *** 2025-09-17 00:45:30.507982 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.507993 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.508003 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.508014 | orchestrator | 2025-09-17 00:45:30.508025 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-17 00:45:30.508035 | orchestrator | Wednesday 17 September 2025 00:44:05 +0000 (0:00:00.709) 0:02:19.481 *** 2025-09-17 00:45:30.508047 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-17 00:45:30.508057 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-17 00:45:30.508068 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-17 00:45:30.508079 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-17 00:45:30.508090 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-17 00:45:30.508100 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-17 00:45:30.508111 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-17 00:45:30.508122 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-17 00:45:30.508133 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-17 00:45:30.508148 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-17 00:45:30.508160 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-17 00:45:30.508171 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-17 00:45:30.508181 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-17 00:45:30.508192 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-17 00:45:30.508203 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-17 00:45:30.508213 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-17 00:45:30.508224 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-17 00:45:30.508235 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-17 00:45:30.508246 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-17 00:45:30.508256 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-17 00:45:30.508267 | orchestrator | 2025-09-17 00:45:30.508277 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-17 00:45:30.508288 | orchestrator | 2025-09-17 00:45:30.508299 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-17 00:45:30.508309 | orchestrator | Wednesday 17 September 2025 00:44:08 +0000 (0:00:03.063) 0:02:22.544 *** 2025-09-17 00:45:30.508320 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:45:30.508331 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:45:30.508341 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:45:30.508352 | orchestrator | 2025-09-17 00:45:30.508363 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-17 00:45:30.508379 | orchestrator | Wednesday 17 September 2025 00:44:09 +0000 (0:00:00.490) 0:02:23.034 *** 2025-09-17 00:45:30.508390 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:45:30.508401 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:45:30.508411 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:45:30.508422 | orchestrator | 2025-09-17 00:45:30.508432 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-17 00:45:30.508443 | orchestrator | Wednesday 17 September 2025 00:44:09 +0000 (0:00:00.681) 0:02:23.715 *** 2025-09-17 00:45:30.508454 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:45:30.508465 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:45:30.508475 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:45:30.508486 | orchestrator | 2025-09-17 
00:45:30.509163 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-17 00:45:30.509184 | orchestrator | Wednesday 17 September 2025 00:44:10 +0000 (0:00:00.308) 0:02:24.024 *** 2025-09-17 00:45:30.509194 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:45:30.509204 | orchestrator | 2025-09-17 00:45:30.509214 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-17 00:45:30.509223 | orchestrator | Wednesday 17 September 2025 00:44:10 +0000 (0:00:00.701) 0:02:24.726 *** 2025-09-17 00:45:30.509233 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.509242 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.509252 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.509261 | orchestrator | 2025-09-17 00:45:30.509271 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-09-17 00:45:30.509281 | orchestrator | Wednesday 17 September 2025 00:44:11 +0000 (0:00:00.393) 0:02:25.119 *** 2025-09-17 00:45:30.509290 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.509299 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.509309 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.509319 | orchestrator | 2025-09-17 00:45:30.509328 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-17 00:45:30.509337 | orchestrator | Wednesday 17 September 2025 00:44:11 +0000 (0:00:00.325) 0:02:25.444 *** 2025-09-17 00:45:30.509347 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.509356 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.509366 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.509375 | orchestrator | 2025-09-17 00:45:30.509384 | orchestrator | TASK [k3s_agent : Create 
/etc/rancher/k3s directory] *************************** 2025-09-17 00:45:30.509394 | orchestrator | Wednesday 17 September 2025 00:44:11 +0000 (0:00:00.380) 0:02:25.824 *** 2025-09-17 00:45:30.509404 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:45:30.509414 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:45:30.509423 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:45:30.509433 | orchestrator | 2025-09-17 00:45:30.509442 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-17 00:45:30.509451 | orchestrator | Wednesday 17 September 2025 00:44:12 +0000 (0:00:00.868) 0:02:26.693 *** 2025-09-17 00:45:30.509461 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:45:30.509470 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:45:30.509480 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:45:30.509489 | orchestrator | 2025-09-17 00:45:30.509498 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-17 00:45:30.509508 | orchestrator | Wednesday 17 September 2025 00:44:13 +0000 (0:00:01.074) 0:02:27.768 *** 2025-09-17 00:45:30.509517 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:45:30.509527 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:45:30.509536 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:45:30.509545 | orchestrator | 2025-09-17 00:45:30.509555 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-17 00:45:30.509564 | orchestrator | Wednesday 17 September 2025 00:44:15 +0000 (0:00:01.416) 0:02:29.184 *** 2025-09-17 00:45:30.509584 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:45:30.509593 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:45:30.509603 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:45:30.509612 | orchestrator | 2025-09-17 00:45:30.509630 | orchestrator | PLAY [Prepare kubeconfig file] 
************************************************* 2025-09-17 00:45:30.509640 | orchestrator | 2025-09-17 00:45:30.509650 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-17 00:45:30.509660 | orchestrator | Wednesday 17 September 2025 00:44:27 +0000 (0:00:11.920) 0:02:41.105 *** 2025-09-17 00:45:30.509669 | orchestrator | ok: [testbed-manager] 2025-09-17 00:45:30.509679 | orchestrator | 2025-09-17 00:45:30.509688 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-17 00:45:30.509698 | orchestrator | Wednesday 17 September 2025 00:44:27 +0000 (0:00:00.799) 0:02:41.904 *** 2025-09-17 00:45:30.509707 | orchestrator | changed: [testbed-manager] 2025-09-17 00:45:30.509717 | orchestrator | 2025-09-17 00:45:30.509726 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-17 00:45:30.509736 | orchestrator | Wednesday 17 September 2025 00:44:28 +0000 (0:00:00.398) 0:02:42.302 *** 2025-09-17 00:45:30.509745 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-17 00:45:30.509755 | orchestrator | 2025-09-17 00:45:30.509764 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-17 00:45:30.509773 | orchestrator | Wednesday 17 September 2025 00:44:28 +0000 (0:00:00.569) 0:02:42.872 *** 2025-09-17 00:45:30.509788 | orchestrator | changed: [testbed-manager] 2025-09-17 00:45:30.509798 | orchestrator | 2025-09-17 00:45:30.509807 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-17 00:45:30.509817 | orchestrator | Wednesday 17 September 2025 00:44:29 +0000 (0:00:00.968) 0:02:43.840 *** 2025-09-17 00:45:30.509827 | orchestrator | changed: [testbed-manager] 2025-09-17 00:45:30.509836 | orchestrator | 2025-09-17 00:45:30.509845 | orchestrator | TASK [Make kubeconfig available for use inside the 
manager service] ************ 2025-09-17 00:45:30.509855 | orchestrator | Wednesday 17 September 2025 00:44:30 +0000 (0:00:00.631) 0:02:44.472 *** 2025-09-17 00:45:30.509864 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-17 00:45:30.509874 | orchestrator | 2025-09-17 00:45:30.509883 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-17 00:45:30.509910 | orchestrator | Wednesday 17 September 2025 00:44:32 +0000 (0:00:01.640) 0:02:46.113 *** 2025-09-17 00:45:30.509920 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-17 00:45:30.509930 | orchestrator | 2025-09-17 00:45:30.509939 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-17 00:45:30.509948 | orchestrator | Wednesday 17 September 2025 00:44:33 +0000 (0:00:01.068) 0:02:47.181 *** 2025-09-17 00:45:30.509958 | orchestrator | changed: [testbed-manager] 2025-09-17 00:45:30.509967 | orchestrator | 2025-09-17 00:45:30.509977 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-17 00:45:30.509986 | orchestrator | Wednesday 17 September 2025 00:44:33 +0000 (0:00:00.362) 0:02:47.543 *** 2025-09-17 00:45:30.509996 | orchestrator | changed: [testbed-manager] 2025-09-17 00:45:30.510005 | orchestrator | 2025-09-17 00:45:30.510015 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-17 00:45:30.510070 | orchestrator | 2025-09-17 00:45:30.510081 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-17 00:45:30.510090 | orchestrator | Wednesday 17 September 2025 00:44:34 +0000 (0:00:00.572) 0:02:48.116 *** 2025-09-17 00:45:30.510099 | orchestrator | ok: [testbed-manager] 2025-09-17 00:45:30.510109 | orchestrator | 2025-09-17 00:45:30.510118 | orchestrator | TASK [kubectl : Include distribution specific install tasks] 
******************* 2025-09-17 00:45:30.510127 | orchestrator | Wednesday 17 September 2025 00:44:34 +0000 (0:00:00.126) 0:02:48.243 *** 2025-09-17 00:45:30.510137 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-17 00:45:30.510153 | orchestrator | 2025-09-17 00:45:30.510163 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-17 00:45:30.510172 | orchestrator | Wednesday 17 September 2025 00:44:34 +0000 (0:00:00.269) 0:02:48.512 *** 2025-09-17 00:45:30.510182 | orchestrator | ok: [testbed-manager] 2025-09-17 00:45:30.510191 | orchestrator | 2025-09-17 00:45:30.510201 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-09-17 00:45:30.510210 | orchestrator | Wednesday 17 September 2025 00:44:35 +0000 (0:00:00.819) 0:02:49.331 *** 2025-09-17 00:45:30.510220 | orchestrator | ok: [testbed-manager] 2025-09-17 00:45:30.510229 | orchestrator | 2025-09-17 00:45:30.510238 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-17 00:45:30.510248 | orchestrator | Wednesday 17 September 2025 00:44:36 +0000 (0:00:01.502) 0:02:50.834 *** 2025-09-17 00:45:30.510257 | orchestrator | changed: [testbed-manager] 2025-09-17 00:45:30.510267 | orchestrator | 2025-09-17 00:45:30.510276 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-17 00:45:30.510285 | orchestrator | Wednesday 17 September 2025 00:44:37 +0000 (0:00:00.744) 0:02:51.578 *** 2025-09-17 00:45:30.510295 | orchestrator | ok: [testbed-manager] 2025-09-17 00:45:30.510304 | orchestrator | 2025-09-17 00:45:30.510314 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-17 00:45:30.510323 | orchestrator | Wednesday 17 September 2025 00:44:37 +0000 (0:00:00.384) 0:02:51.963 *** 2025-09-17 00:45:30.510333 | 
orchestrator | changed: [testbed-manager] 2025-09-17 00:45:30.510342 | orchestrator | 2025-09-17 00:45:30.510351 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-17 00:45:30.510361 | orchestrator | Wednesday 17 September 2025 00:44:44 +0000 (0:00:06.570) 0:02:58.533 *** 2025-09-17 00:45:30.510370 | orchestrator | changed: [testbed-manager] 2025-09-17 00:45:30.510380 | orchestrator | 2025-09-17 00:45:30.510389 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-17 00:45:30.510399 | orchestrator | Wednesday 17 September 2025 00:44:58 +0000 (0:00:14.270) 0:03:12.804 *** 2025-09-17 00:45:30.510408 | orchestrator | ok: [testbed-manager] 2025-09-17 00:45:30.510418 | orchestrator | 2025-09-17 00:45:30.510427 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-17 00:45:30.510437 | orchestrator | 2025-09-17 00:45:30.510446 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-17 00:45:30.510462 | orchestrator | Wednesday 17 September 2025 00:44:59 +0000 (0:00:00.569) 0:03:13.373 *** 2025-09-17 00:45:30.510472 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.510481 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.510491 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.510500 | orchestrator | 2025-09-17 00:45:30.510510 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-17 00:45:30.510519 | orchestrator | Wednesday 17 September 2025 00:44:59 +0000 (0:00:00.243) 0:03:13.617 *** 2025-09-17 00:45:30.510529 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.510538 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.510548 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.510558 | orchestrator | 2025-09-17 00:45:30.510567 | 
orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-17 00:45:30.510577 | orchestrator | Wednesday 17 September 2025 00:44:59 +0000 (0:00:00.346) 0:03:13.963 *** 2025-09-17 00:45:30.510586 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:45:30.510596 | orchestrator | 2025-09-17 00:45:30.510605 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-17 00:45:30.510615 | orchestrator | Wednesday 17 September 2025 00:45:00 +0000 (0:00:00.627) 0:03:14.591 *** 2025-09-17 00:45:30.510624 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.510634 | orchestrator | 2025-09-17 00:45:30.510651 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-17 00:45:30.510667 | orchestrator | Wednesday 17 September 2025 00:45:00 +0000 (0:00:00.198) 0:03:14.789 *** 2025-09-17 00:45:30.510677 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.510686 | orchestrator | 2025-09-17 00:45:30.510696 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-17 00:45:30.510705 | orchestrator | Wednesday 17 September 2025 00:45:01 +0000 (0:00:00.234) 0:03:15.023 *** 2025-09-17 00:45:30.510715 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.510724 | orchestrator | 2025-09-17 00:45:30.510734 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-17 00:45:30.510743 | orchestrator | Wednesday 17 September 2025 00:45:01 +0000 (0:00:00.271) 0:03:15.295 *** 2025-09-17 00:45:30.510753 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.510762 | orchestrator | 2025-09-17 00:45:30.510772 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-17 00:45:30.510781 | 
orchestrator | Wednesday 17 September 2025 00:45:01 +0000 (0:00:00.389) 0:03:15.685 *** 2025-09-17 00:45:30.510791 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.510800 | orchestrator | 2025-09-17 00:45:30.510810 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-17 00:45:30.510819 | orchestrator | Wednesday 17 September 2025 00:45:01 +0000 (0:00:00.195) 0:03:15.880 *** 2025-09-17 00:45:30.510829 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.510838 | orchestrator | 2025-09-17 00:45:30.510848 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-17 00:45:30.510857 | orchestrator | Wednesday 17 September 2025 00:45:02 +0000 (0:00:00.191) 0:03:16.072 *** 2025-09-17 00:45:30.510867 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.510877 | orchestrator | 2025-09-17 00:45:30.510928 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-17 00:45:30.510941 | orchestrator | Wednesday 17 September 2025 00:45:02 +0000 (0:00:00.193) 0:03:16.265 *** 2025-09-17 00:45:30.510950 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.510959 | orchestrator | 2025-09-17 00:45:30.510969 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-17 00:45:30.510978 | orchestrator | Wednesday 17 September 2025 00:45:02 +0000 (0:00:00.169) 0:03:16.434 *** 2025-09-17 00:45:30.510988 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.510997 | orchestrator | 2025-09-17 00:45:30.511007 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-17 00:45:30.511016 | orchestrator | Wednesday 17 September 2025 00:45:02 +0000 (0:00:00.202) 0:03:16.637 *** 2025-09-17 00:45:30.511026 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-17 00:45:30.511035 | 
orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-17 00:45:30.511045 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511054 | orchestrator | 2025-09-17 00:45:30.511064 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-17 00:45:30.511073 | orchestrator | Wednesday 17 September 2025 00:45:03 +0000 (0:00:00.538) 0:03:17.175 *** 2025-09-17 00:45:30.511082 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511092 | orchestrator | 2025-09-17 00:45:30.511101 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-17 00:45:30.511111 | orchestrator | Wednesday 17 September 2025 00:45:03 +0000 (0:00:00.180) 0:03:17.356 *** 2025-09-17 00:45:30.511120 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511129 | orchestrator | 2025-09-17 00:45:30.511139 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-17 00:45:30.511149 | orchestrator | Wednesday 17 September 2025 00:45:03 +0000 (0:00:00.195) 0:03:17.551 *** 2025-09-17 00:45:30.511158 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511168 | orchestrator | 2025-09-17 00:45:30.511177 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-17 00:45:30.511187 | orchestrator | Wednesday 17 September 2025 00:45:03 +0000 (0:00:00.216) 0:03:17.767 *** 2025-09-17 00:45:30.511202 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511212 | orchestrator | 2025-09-17 00:45:30.511222 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-17 00:45:30.511231 | orchestrator | Wednesday 17 September 2025 00:45:03 +0000 (0:00:00.189) 0:03:17.957 *** 2025-09-17 00:45:30.511240 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511250 | orchestrator | 2025-09-17 00:45:30.511259 | 
orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-17 00:45:30.511269 | orchestrator | Wednesday 17 September 2025 00:45:04 +0000 (0:00:00.201) 0:03:18.158 *** 2025-09-17 00:45:30.511278 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511288 | orchestrator | 2025-09-17 00:45:30.511297 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-17 00:45:30.511313 | orchestrator | 2025-09-17 00:45:30 | INFO  | Task dd778f4a-b1ef-474a-a20e-e0fa1456ec57 is in state SUCCESS 2025-09-17 00:45:30.511324 | orchestrator | Wednesday 17 September 2025 00:45:04 +0000 (0:00:00.181) 0:03:18.339 *** 2025-09-17 00:45:30.511333 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511342 | orchestrator | 2025-09-17 00:45:30.511352 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-09-17 00:45:30.511361 | orchestrator | Wednesday 17 September 2025 00:45:04 +0000 (0:00:00.204) 0:03:18.543 *** 2025-09-17 00:45:30.511371 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511380 | orchestrator | 2025-09-17 00:45:30.511389 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-17 00:45:30.511399 | orchestrator | Wednesday 17 September 2025 00:45:04 +0000 (0:00:00.178) 0:03:18.722 *** 2025-09-17 00:45:30.511408 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511417 | orchestrator | 2025-09-17 00:45:30.511427 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-17 00:45:30.511436 | orchestrator | Wednesday 17 September 2025 00:45:04 +0000 (0:00:00.189) 0:03:18.911 *** 2025-09-17 00:45:30.511446 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511454 | orchestrator | 2025-09-17 00:45:30.511465 | orchestrator | TASK [k3s_server_post : Install Cilium] 
**************************************** 2025-09-17 00:45:30.511473 | orchestrator | Wednesday 17 September 2025 00:45:05 +0000 (0:00:00.187) 0:03:19.098 *** 2025-09-17 00:45:30.511481 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511488 | orchestrator | 2025-09-17 00:45:30.511496 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-17 00:45:30.511504 | orchestrator | Wednesday 17 September 2025 00:45:05 +0000 (0:00:00.210) 0:03:19.309 *** 2025-09-17 00:45:30.511511 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-17 00:45:30.511519 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-17 00:45:30.511527 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-17 00:45:30.511534 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-17 00:45:30.511542 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511550 | orchestrator | 2025-09-17 00:45:30.511557 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-17 00:45:30.511565 | orchestrator | Wednesday 17 September 2025 00:45:06 +0000 (0:00:00.930) 0:03:20.240 *** 2025-09-17 00:45:30.511573 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511580 | orchestrator | 2025-09-17 00:45:30.511588 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-17 00:45:30.511596 | orchestrator | Wednesday 17 September 2025 00:45:06 +0000 (0:00:00.175) 0:03:20.416 *** 2025-09-17 00:45:30.511603 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511611 | orchestrator | 2025-09-17 00:45:30.511619 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-17 00:45:30.511626 | orchestrator | Wednesday 17 September 2025 00:45:06 +0000 (0:00:00.221) 0:03:20.637 
*** 2025-09-17 00:45:30.511634 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511646 | orchestrator | 2025-09-17 00:45:30.511654 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-17 00:45:30.511662 | orchestrator | Wednesday 17 September 2025 00:45:06 +0000 (0:00:00.306) 0:03:20.943 *** 2025-09-17 00:45:30.511670 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511678 | orchestrator | 2025-09-17 00:45:30.511685 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-09-17 00:45:30.511693 | orchestrator | Wednesday 17 September 2025 00:45:07 +0000 (0:00:00.302) 0:03:21.246 *** 2025-09-17 00:45:30.511701 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-17 00:45:30.511709 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-09-17 00:45:30.511716 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511724 | orchestrator | 2025-09-17 00:45:30.511732 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-17 00:45:30.511739 | orchestrator | Wednesday 17 September 2025 00:45:07 +0000 (0:00:00.283) 0:03:21.530 *** 2025-09-17 00:45:30.511747 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.511754 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.511762 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.511770 | orchestrator | 2025-09-17 00:45:30.511778 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-17 00:45:30.511785 | orchestrator | Wednesday 17 September 2025 00:45:07 +0000 (0:00:00.327) 0:03:21.858 *** 2025-09-17 00:45:30.511793 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.511800 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.511808 | 
orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.511816 | orchestrator | 2025-09-17 00:45:30.511823 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-17 00:45:30.511831 | orchestrator | 2025-09-17 00:45:30.511839 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-17 00:45:30.511847 | orchestrator | Wednesday 17 September 2025 00:45:08 +0000 (0:00:01.107) 0:03:22.965 *** 2025-09-17 00:45:30.511854 | orchestrator | ok: [testbed-manager] 2025-09-17 00:45:30.511862 | orchestrator | 2025-09-17 00:45:30.511870 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-17 00:45:30.511877 | orchestrator | Wednesday 17 September 2025 00:45:09 +0000 (0:00:00.155) 0:03:23.121 *** 2025-09-17 00:45:30.511885 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-09-17 00:45:30.511907 | orchestrator | 2025-09-17 00:45:30.511915 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-17 00:45:30.511923 | orchestrator | Wednesday 17 September 2025 00:45:09 +0000 (0:00:00.197) 0:03:23.318 *** 2025-09-17 00:45:30.511930 | orchestrator | changed: [testbed-manager] 2025-09-17 00:45:30.511938 | orchestrator | 2025-09-17 00:45:30.511946 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-17 00:45:30.511954 | orchestrator | 2025-09-17 00:45:30.511966 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-17 00:45:30.511974 | orchestrator | Wednesday 17 September 2025 00:45:15 +0000 (0:00:06.009) 0:03:29.328 *** 2025-09-17 00:45:30.511982 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:45:30.511990 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:45:30.511998 | orchestrator | ok: [testbed-node-5] 2025-09-17 
00:45:30.512005 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:45:30.512013 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:45:30.512021 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:45:30.512028 | orchestrator | 2025-09-17 00:45:30.512036 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-17 00:45:30.512044 | orchestrator | Wednesday 17 September 2025 00:45:16 +0000 (0:00:00.692) 0:03:30.021 *** 2025-09-17 00:45:30.512052 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-17 00:45:30.512060 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-17 00:45:30.512073 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-17 00:45:30.512081 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-17 00:45:30.512092 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-17 00:45:30.512100 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-17 00:45:30.512108 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-17 00:45:30.512116 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-17 00:45:30.512124 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-17 00:45:30.512131 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-17 00:45:30.512139 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-17 00:45:30.512147 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-17 00:45:30.512155 
| orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-17 00:45:30.512162 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-17 00:45:30.512170 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-17 00:45:30.512178 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-17 00:45:30.512186 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-17 00:45:30.512193 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-17 00:45:30.512201 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-17 00:45:30.512209 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-17 00:45:30.512216 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-17 00:45:30.512224 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-17 00:45:30.512232 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-17 00:45:30.512240 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-17 00:45:30.512248 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-17 00:45:30.512255 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-17 00:45:30.512263 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-17 00:45:30.512271 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-17 00:45:30.512278 | 
orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-17 00:45:30.512286 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-17 00:45:30.512294 | orchestrator | 2025-09-17 00:45:30.512302 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-17 00:45:30.512310 | orchestrator | Wednesday 17 September 2025 00:45:27 +0000 (0:00:11.722) 0:03:41.743 *** 2025-09-17 00:45:30.512317 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.512325 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.512333 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.512341 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.512349 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.512357 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.512370 | orchestrator | 2025-09-17 00:45:30.512378 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-17 00:45:30.512386 | orchestrator | Wednesday 17 September 2025 00:45:28 +0000 (0:00:00.568) 0:03:42.312 *** 2025-09-17 00:45:30.512394 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:45:30.512402 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:45:30.512410 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:45:30.512417 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:45:30.512425 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:45:30.512433 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:45:30.512441 | orchestrator | 2025-09-17 00:45:30.512453 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:45:30.512462 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:45:30.512472 | orchestrator | 
testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-17 00:45:30.512480 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-17 00:45:30.512488 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-17 00:45:30.512496 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-17 00:45:30.512507 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-17 00:45:30.512516 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-17 00:45:30.512524 | orchestrator | 2025-09-17 00:45:30.512531 | orchestrator | 2025-09-17 00:45:30.512539 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:45:30.512547 | orchestrator | Wednesday 17 September 2025 00:45:28 +0000 (0:00:00.524) 0:03:42.836 *** 2025-09-17 00:45:30.512555 | orchestrator | =============================================================================== 2025-09-17 00:45:30.512563 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.72s 2025-09-17 00:45:30.512571 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.67s 2025-09-17 00:45:30.512579 | orchestrator | kubectl : Install required packages ------------------------------------ 14.27s 2025-09-17 00:45:30.512586 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.92s 2025-09-17 00:45:30.512594 | orchestrator | Manage labels ---------------------------------------------------------- 11.72s 2025-09-17 00:45:30.512602 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.57s 2025-09-17 
00:45:30.512610 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.01s 2025-09-17 00:45:30.512618 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.86s 2025-09-17 00:45:30.512625 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.06s 2025-09-17 00:45:30.512633 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.59s 2025-09-17 00:45:30.512641 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.50s 2025-09-17 00:45:30.512649 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.24s 2025-09-17 00:45:30.512657 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.23s 2025-09-17 00:45:30.512665 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 2.20s 2025-09-17 00:45:30.512677 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.02s 2025-09-17 00:45:30.512685 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.02s 2025-09-17 00:45:30.512693 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 1.96s 2025-09-17 00:45:30.512701 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.65s 2025-09-17 00:45:30.512709 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.64s 2025-09-17 00:45:30.512716 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.61s 2025-09-17 00:45:30.512724 | orchestrator | 2025-09-17 00:45:30 | INFO  | Task cb9eef36-cfe5-455f-b553-666a4d1b4587 is in state STARTED 2025-09-17 00:45:30.512732 | orchestrator | 2025-09-17 00:45:30 | INFO  | 
Task ae55a675-f9b8-4d84-93eb-8b8ee4aae3f6 is in state STARTED 2025-09-17 00:45:30.512740 | orchestrator | 2025-09-17 00:45:30 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:30.512748 | orchestrator | 2025-09-17 00:45:30 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:30.512756 | orchestrator | 2025-09-17 00:45:30 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:30.512763 | orchestrator | 2025-09-17 00:45:30 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:30.512771 | orchestrator | 2025-09-17 00:45:30 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:33.570949 | orchestrator | 2025-09-17 00:45:33 | INFO  | Task cb9eef36-cfe5-455f-b553-666a4d1b4587 is in state STARTED 2025-09-17 00:45:33.571062 | orchestrator | 2025-09-17 00:45:33 | INFO  | Task ae55a675-f9b8-4d84-93eb-8b8ee4aae3f6 is in state STARTED 2025-09-17 00:45:33.571279 | orchestrator | 2025-09-17 00:45:33 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:33.571912 | orchestrator | 2025-09-17 00:45:33 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:33.575086 | orchestrator | 2025-09-17 00:45:33 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:33.575112 | orchestrator | 2025-09-17 00:45:33 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:33.575123 | orchestrator | 2025-09-17 00:45:33 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:36.605838 | orchestrator | 2025-09-17 00:45:36 | INFO  | Task cb9eef36-cfe5-455f-b553-666a4d1b4587 is in state STARTED 2025-09-17 00:45:36.607367 | orchestrator | 2025-09-17 00:45:36 | INFO  | Task ae55a675-f9b8-4d84-93eb-8b8ee4aae3f6 is in state SUCCESS 2025-09-17 00:45:36.607861 | orchestrator | 2025-09-17 00:45:36 | INFO  | Task 
abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:36.610375 | orchestrator | 2025-09-17 00:45:36 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:36.614972 | orchestrator | 2025-09-17 00:45:36 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:36.616403 | orchestrator | 2025-09-17 00:45:36 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:36.617465 | orchestrator | 2025-09-17 00:45:36 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:39.648183 | orchestrator | 2025-09-17 00:45:39 | INFO  | Task cb9eef36-cfe5-455f-b553-666a4d1b4587 is in state STARTED 2025-09-17 00:45:39.648964 | orchestrator | 2025-09-17 00:45:39 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:39.650534 | orchestrator | 2025-09-17 00:45:39 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:39.652084 | orchestrator | 2025-09-17 00:45:39 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:39.653612 | orchestrator | 2025-09-17 00:45:39 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:39.654487 | orchestrator | 2025-09-17 00:45:39 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:42.679632 | orchestrator | 2025-09-17 00:45:42 | INFO  | Task cb9eef36-cfe5-455f-b553-666a4d1b4587 is in state SUCCESS 2025-09-17 00:45:42.680498 | orchestrator | 2025-09-17 00:45:42 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:42.681130 | orchestrator | 2025-09-17 00:45:42 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:42.681630 | orchestrator | 2025-09-17 00:45:42 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:42.682438 | orchestrator | 2025-09-17 00:45:42 | INFO  | Task 
101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:42.682466 | orchestrator | 2025-09-17 00:45:42 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:45.714552 | orchestrator | 2025-09-17 00:45:45 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:45.715026 | orchestrator | 2025-09-17 00:45:45 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:45.717995 | orchestrator | 2025-09-17 00:45:45 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:45.718055 | orchestrator | 2025-09-17 00:45:45 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:45.720060 | orchestrator | 2025-09-17 00:45:45 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:48.758369 | orchestrator | 2025-09-17 00:45:48 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:48.758693 | orchestrator | 2025-09-17 00:45:48 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:48.759543 | orchestrator | 2025-09-17 00:45:48 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:48.760361 | orchestrator | 2025-09-17 00:45:48 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:48.760385 | orchestrator | 2025-09-17 00:45:48 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:51.826739 | orchestrator | 2025-09-17 00:45:51 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:51.829650 | orchestrator | 2025-09-17 00:45:51 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:51.830413 | orchestrator | 2025-09-17 00:45:51 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:51.831177 | orchestrator | 2025-09-17 00:45:51 | INFO  | Task 
101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:51.831833 | orchestrator | 2025-09-17 00:45:51 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:54.884770 | orchestrator | 2025-09-17 00:45:54 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:54.887488 | orchestrator | 2025-09-17 00:45:54 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:54.889551 | orchestrator | 2025-09-17 00:45:54 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:54.891772 | orchestrator | 2025-09-17 00:45:54 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:54.891818 | orchestrator | 2025-09-17 00:45:54 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:45:57.932540 | orchestrator | 2025-09-17 00:45:57 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:45:57.932714 | orchestrator | 2025-09-17 00:45:57 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:45:57.933581 | orchestrator | 2025-09-17 00:45:57 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:45:57.934629 | orchestrator | 2025-09-17 00:45:57 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:45:57.934927 | orchestrator | 2025-09-17 00:45:57 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:00.984496 | orchestrator | 2025-09-17 00:46:00 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:00.988463 | orchestrator | 2025-09-17 00:46:00 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:00.991362 | orchestrator | 2025-09-17 00:46:00 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:00.994410 | orchestrator | 2025-09-17 00:46:00 | INFO  | Task 
101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:00.995141 | orchestrator | 2025-09-17 00:46:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:04.033857 | orchestrator | 2025-09-17 00:46:04 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:04.034536 | orchestrator | 2025-09-17 00:46:04 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:04.039512 | orchestrator | 2025-09-17 00:46:04 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:04.041198 | orchestrator | 2025-09-17 00:46:04 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:04.041333 | orchestrator | 2025-09-17 00:46:04 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:07.088871 | orchestrator | 2025-09-17 00:46:07 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:07.091078 | orchestrator | 2025-09-17 00:46:07 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:07.093595 | orchestrator | 2025-09-17 00:46:07 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:07.095597 | orchestrator | 2025-09-17 00:46:07 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:07.095622 | orchestrator | 2025-09-17 00:46:07 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:10.136152 | orchestrator | 2025-09-17 00:46:10 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:10.137211 | orchestrator | 2025-09-17 00:46:10 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:10.139849 | orchestrator | 2025-09-17 00:46:10 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:10.141406 | orchestrator | 2025-09-17 00:46:10 | INFO  | Task 
101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:10.141445 | orchestrator | 2025-09-17 00:46:10 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:13.173148 | orchestrator | 2025-09-17 00:46:13 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:13.173240 | orchestrator | 2025-09-17 00:46:13 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:13.173621 | orchestrator | 2025-09-17 00:46:13 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:13.174774 | orchestrator | 2025-09-17 00:46:13 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:13.174870 | orchestrator | 2025-09-17 00:46:13 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:16.218597 | orchestrator | 2025-09-17 00:46:16 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:16.221392 | orchestrator | 2025-09-17 00:46:16 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:16.225120 | orchestrator | 2025-09-17 00:46:16 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:16.226491 | orchestrator | 2025-09-17 00:46:16 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:16.226542 | orchestrator | 2025-09-17 00:46:16 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:19.268475 | orchestrator | 2025-09-17 00:46:19 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:19.270297 | orchestrator | 2025-09-17 00:46:19 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:19.272550 | orchestrator | 2025-09-17 00:46:19 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:19.273838 | orchestrator | 2025-09-17 00:46:19 | INFO  | Task 
101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:19.274134 | orchestrator | 2025-09-17 00:46:19 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:22.314784 | orchestrator | 2025-09-17 00:46:22 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:22.316559 | orchestrator | 2025-09-17 00:46:22 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:22.319059 | orchestrator | 2025-09-17 00:46:22 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:22.321619 | orchestrator | 2025-09-17 00:46:22 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:22.321648 | orchestrator | 2025-09-17 00:46:22 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:25.366138 | orchestrator | 2025-09-17 00:46:25 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:25.367980 | orchestrator | 2025-09-17 00:46:25 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:25.369702 | orchestrator | 2025-09-17 00:46:25 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:25.371037 | orchestrator | 2025-09-17 00:46:25 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:25.371463 | orchestrator | 2025-09-17 00:46:25 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:28.408073 | orchestrator | 2025-09-17 00:46:28 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:28.410746 | orchestrator | 2025-09-17 00:46:28 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:28.413130 | orchestrator | 2025-09-17 00:46:28 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:28.415073 | orchestrator | 2025-09-17 00:46:28 | INFO  | Task 
101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:28.415112 | orchestrator | 2025-09-17 00:46:28 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:31.447168 | orchestrator | 2025-09-17 00:46:31 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:31.448968 | orchestrator | 2025-09-17 00:46:31 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:31.449687 | orchestrator | 2025-09-17 00:46:31 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:31.451106 | orchestrator | 2025-09-17 00:46:31 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:31.451304 | orchestrator | 2025-09-17 00:46:31 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:34.494659 | orchestrator | 2025-09-17 00:46:34 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:34.495361 | orchestrator | 2025-09-17 00:46:34 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:34.497610 | orchestrator | 2025-09-17 00:46:34 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:34.499288 | orchestrator | 2025-09-17 00:46:34 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:34.499680 | orchestrator | 2025-09-17 00:46:34 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:37.536321 | orchestrator | 2025-09-17 00:46:37 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:37.536415 | orchestrator | 2025-09-17 00:46:37 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:37.536430 | orchestrator | 2025-09-17 00:46:37 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:37.539103 | orchestrator | 2025-09-17 00:46:37 | INFO  | Task 
101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:37.539166 | orchestrator | 2025-09-17 00:46:37 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:40.572349 | orchestrator | 2025-09-17 00:46:40 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:40.574939 | orchestrator | 2025-09-17 00:46:40 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:40.577494 | orchestrator | 2025-09-17 00:46:40 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:40.579633 | orchestrator | 2025-09-17 00:46:40 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:40.579669 | orchestrator | 2025-09-17 00:46:40 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:43.619469 | orchestrator | 2025-09-17 00:46:43 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:43.620881 | orchestrator | 2025-09-17 00:46:43 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:43.622695 | orchestrator | 2025-09-17 00:46:43 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:43.625935 | orchestrator | 2025-09-17 00:46:43 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:43.625970 | orchestrator | 2025-09-17 00:46:43 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:46.671747 | orchestrator | 2025-09-17 00:46:46 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:46.672499 | orchestrator | 2025-09-17 00:46:46 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:46.675035 | orchestrator | 2025-09-17 00:46:46 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:46.676506 | orchestrator | 2025-09-17 00:46:46 | INFO  | Task 
101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:46.676558 | orchestrator | 2025-09-17 00:46:46 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:49.715769 | orchestrator | 2025-09-17 00:46:49 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:49.716000 | orchestrator | 2025-09-17 00:46:49 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:49.716856 | orchestrator | 2025-09-17 00:46:49 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:49.718071 | orchestrator | 2025-09-17 00:46:49 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:49.718203 | orchestrator | 2025-09-17 00:46:49 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:52.760881 | orchestrator | 2025-09-17 00:46:52 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:52.761036 | orchestrator | 2025-09-17 00:46:52 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state STARTED 2025-09-17 00:46:52.762656 | orchestrator | 2025-09-17 00:46:52 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED 2025-09-17 00:46:52.763535 | orchestrator | 2025-09-17 00:46:52 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:46:52.763778 | orchestrator | 2025-09-17 00:46:52 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:46:55.801759 | orchestrator | 2025-09-17 00:46:55 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:46:55.803920 | orchestrator | 2025-09-17 00:46:55 | INFO  | Task 54ada027-b4e8-436f-8080-9457fd176c75 is in state SUCCESS 2025-09-17 00:46:55.805547 | orchestrator | 2025-09-17 00:46:55.805582 | orchestrator | 2025-09-17 00:46:55.805594 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-17 00:46:55.805606 | 
orchestrator | 2025-09-17 00:46:55.805617 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-17 00:46:55.805628 | orchestrator | Wednesday 17 September 2025 00:45:32 +0000 (0:00:00.129) 0:00:00.129 *** 2025-09-17 00:46:55.805640 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-17 00:46:55.805651 | orchestrator | 2025-09-17 00:46:55.805661 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-17 00:46:55.805672 | orchestrator | Wednesday 17 September 2025 00:45:33 +0000 (0:00:00.696) 0:00:00.826 *** 2025-09-17 00:46:55.805683 | orchestrator | changed: [testbed-manager] 2025-09-17 00:46:55.805694 | orchestrator | 2025-09-17 00:46:55.805705 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-17 00:46:55.805716 | orchestrator | Wednesday 17 September 2025 00:45:34 +0000 (0:00:01.170) 0:00:01.996 *** 2025-09-17 00:46:55.805726 | orchestrator | changed: [testbed-manager] 2025-09-17 00:46:55.805737 | orchestrator | 2025-09-17 00:46:55.805748 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:46:55.805759 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:46:55.805771 | orchestrator | 2025-09-17 00:46:55.805782 | orchestrator | 2025-09-17 00:46:55.805810 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:46:55.805821 | orchestrator | Wednesday 17 September 2025 00:45:34 +0000 (0:00:00.440) 0:00:02.436 *** 2025-09-17 00:46:55.805832 | orchestrator | =============================================================================== 2025-09-17 00:46:55.805842 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.17s 2025-09-17 00:46:55.805853 | orchestrator | Get 
kubeconfig file ----------------------------------------------------- 0.70s 2025-09-17 00:46:55.805864 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.44s 2025-09-17 00:46:55.805875 | orchestrator | 2025-09-17 00:46:55.805931 | orchestrator | 2025-09-17 00:46:55.805944 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-17 00:46:55.805954 | orchestrator | 2025-09-17 00:46:55.805965 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-17 00:46:55.805976 | orchestrator | Wednesday 17 September 2025 00:45:33 +0000 (0:00:00.169) 0:00:00.169 *** 2025-09-17 00:46:55.805986 | orchestrator | ok: [testbed-manager] 2025-09-17 00:46:55.805998 | orchestrator | 2025-09-17 00:46:55.806008 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-17 00:46:55.806064 | orchestrator | Wednesday 17 September 2025 00:45:33 +0000 (0:00:00.787) 0:00:00.956 *** 2025-09-17 00:46:55.806079 | orchestrator | ok: [testbed-manager] 2025-09-17 00:46:55.806089 | orchestrator | 2025-09-17 00:46:55.806100 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-17 00:46:55.806111 | orchestrator | Wednesday 17 September 2025 00:45:34 +0000 (0:00:00.569) 0:00:01.526 *** 2025-09-17 00:46:55.806121 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-17 00:46:55.806132 | orchestrator | 2025-09-17 00:46:55.806143 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-17 00:46:55.806190 | orchestrator | Wednesday 17 September 2025 00:45:35 +0000 (0:00:00.796) 0:00:02.322 *** 2025-09-17 00:46:55.806218 | orchestrator | changed: [testbed-manager] 2025-09-17 00:46:55.806232 | orchestrator | 2025-09-17 00:46:55.806244 | orchestrator | TASK [Change server address in the kubeconfig] 
********************************* 2025-09-17 00:46:55.806256 | orchestrator | Wednesday 17 September 2025 00:45:36 +0000 (0:00:00.908) 0:00:03.231 *** 2025-09-17 00:46:55.806268 | orchestrator | changed: [testbed-manager] 2025-09-17 00:46:55.806280 | orchestrator | 2025-09-17 00:46:55.806292 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-17 00:46:55.806305 | orchestrator | Wednesday 17 September 2025 00:45:36 +0000 (0:00:00.794) 0:00:04.025 *** 2025-09-17 00:46:55.806317 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-17 00:46:55.806329 | orchestrator | 2025-09-17 00:46:55.806340 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-17 00:46:55.806352 | orchestrator | Wednesday 17 September 2025 00:45:38 +0000 (0:00:01.441) 0:00:05.466 *** 2025-09-17 00:46:55.806365 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-17 00:46:55.806377 | orchestrator | 2025-09-17 00:46:55.806389 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-17 00:46:55.806401 | orchestrator | Wednesday 17 September 2025 00:45:39 +0000 (0:00:00.742) 0:00:06.209 *** 2025-09-17 00:46:55.806413 | orchestrator | ok: [testbed-manager] 2025-09-17 00:46:55.806426 | orchestrator | 2025-09-17 00:46:55.806438 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-17 00:46:55.806450 | orchestrator | Wednesday 17 September 2025 00:45:39 +0000 (0:00:00.369) 0:00:06.578 *** 2025-09-17 00:46:55.806462 | orchestrator | ok: [testbed-manager] 2025-09-17 00:46:55.806473 | orchestrator | 2025-09-17 00:46:55.806486 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:46:55.806499 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:46:55.806510 | 
orchestrator | 2025-09-17 00:46:55.806521 | orchestrator | 2025-09-17 00:46:55.806531 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:46:55.806542 | orchestrator | Wednesday 17 September 2025 00:45:39 +0000 (0:00:00.259) 0:00:06.838 *** 2025-09-17 00:46:55.806552 | orchestrator | =============================================================================== 2025-09-17 00:46:55.806563 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.44s 2025-09-17 00:46:55.806574 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.91s 2025-09-17 00:46:55.806584 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s 2025-09-17 00:46:55.806608 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.79s 2025-09-17 00:46:55.806629 | orchestrator | Get home directory of operator user ------------------------------------- 0.79s 2025-09-17 00:46:55.806640 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.74s 2025-09-17 00:46:55.806651 | orchestrator | Create .kube directory -------------------------------------------------- 0.57s 2025-09-17 00:46:55.806661 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s 2025-09-17 00:46:55.806672 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.26s 2025-09-17 00:46:55.806683 | orchestrator | 2025-09-17 00:46:55.806693 | orchestrator | 2025-09-17 00:46:55.806704 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-17 00:46:55.806715 | orchestrator | 2025-09-17 00:46:55.806725 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-17 00:46:55.806736 | orchestrator | Wednesday 17 September 2025 
00:44:33 +0000 (0:00:00.079) 0:00:00.079 *** 2025-09-17 00:46:55.806747 | orchestrator | ok: [localhost] => { 2025-09-17 00:46:55.806759 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-17 00:46:55.806784 | orchestrator | } 2025-09-17 00:46:55.806796 | orchestrator | 2025-09-17 00:46:55.806807 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-17 00:46:55.806824 | orchestrator | Wednesday 17 September 2025 00:44:33 +0000 (0:00:00.040) 0:00:00.120 *** 2025-09-17 00:46:55.806836 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-17 00:46:55.806847 | orchestrator | ...ignoring 2025-09-17 00:46:55.806858 | orchestrator | 2025-09-17 00:46:55.806869 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-17 00:46:55.806879 | orchestrator | Wednesday 17 September 2025 00:44:36 +0000 (0:00:02.881) 0:00:03.002 *** 2025-09-17 00:46:55.806925 | orchestrator | skipping: [localhost] 2025-09-17 00:46:55.806937 | orchestrator | 2025-09-17 00:46:55.806947 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-17 00:46:55.806958 | orchestrator | Wednesday 17 September 2025 00:44:36 +0000 (0:00:00.160) 0:00:03.162 *** 2025-09-17 00:46:55.806969 | orchestrator | ok: [localhost] 2025-09-17 00:46:55.806979 | orchestrator | 2025-09-17 00:46:55.806990 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 00:46:55.807001 | orchestrator | 2025-09-17 00:46:55.807011 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 00:46:55.807022 | orchestrator | Wednesday 17 September 2025 00:44:37 +0000 (0:00:00.272) 
0:00:03.434 *** 2025-09-17 00:46:55.807033 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:46:55.807043 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:46:55.807054 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:46:55.807065 | orchestrator | 2025-09-17 00:46:55.807075 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 00:46:55.807086 | orchestrator | Wednesday 17 September 2025 00:44:37 +0000 (0:00:00.394) 0:00:03.829 *** 2025-09-17 00:46:55.807096 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-17 00:46:55.807108 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-17 00:46:55.807118 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-17 00:46:55.807129 | orchestrator | 2025-09-17 00:46:55.807140 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-17 00:46:55.807150 | orchestrator | 2025-09-17 00:46:55.807161 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-17 00:46:55.807171 | orchestrator | Wednesday 17 September 2025 00:44:38 +0000 (0:00:00.586) 0:00:04.415 *** 2025-09-17 00:46:55.807182 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:46:55.807193 | orchestrator | 2025-09-17 00:46:55.807211 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-17 00:46:55.807222 | orchestrator | Wednesday 17 September 2025 00:44:38 +0000 (0:00:00.602) 0:00:05.017 *** 2025-09-17 00:46:55.807232 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:46:55.807243 | orchestrator | 2025-09-17 00:46:55.807254 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-17 00:46:55.807264 | orchestrator | Wednesday 17 September 2025 00:44:39 +0000 
(0:00:01.041) 0:00:06.059 *** 2025-09-17 00:46:55.807275 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:46:55.807285 | orchestrator | 2025-09-17 00:46:55.807296 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-09-17 00:46:55.807307 | orchestrator | Wednesday 17 September 2025 00:44:40 +0000 (0:00:00.889) 0:00:06.949 *** 2025-09-17 00:46:55.807317 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:46:55.807328 | orchestrator | 2025-09-17 00:46:55.807339 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-17 00:46:55.807350 | orchestrator | Wednesday 17 September 2025 00:44:40 +0000 (0:00:00.406) 0:00:07.356 *** 2025-09-17 00:46:55.807360 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:46:55.807371 | orchestrator | 2025-09-17 00:46:55.807382 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-17 00:46:55.807393 | orchestrator | Wednesday 17 September 2025 00:44:41 +0000 (0:00:00.436) 0:00:07.792 *** 2025-09-17 00:46:55.807403 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:46:55.807414 | orchestrator | 2025-09-17 00:46:55.807424 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-17 00:46:55.807435 | orchestrator | Wednesday 17 September 2025 00:44:41 +0000 (0:00:00.370) 0:00:08.162 *** 2025-09-17 00:46:55.807446 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:46:55.807457 | orchestrator | 2025-09-17 00:46:55.807467 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-17 00:46:55.807485 | orchestrator | Wednesday 17 September 2025 00:44:43 +0000 (0:00:01.683) 0:00:09.845 *** 2025-09-17 00:46:55.807496 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:46:55.807507 | 
orchestrator | 2025-09-17 00:46:55.807518 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-17 00:46:55.807529 | orchestrator | Wednesday 17 September 2025 00:44:44 +0000 (0:00:00.898) 0:00:10.744 *** 2025-09-17 00:46:55.807540 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:46:55.807550 | orchestrator | 2025-09-17 00:46:55.807561 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-17 00:46:55.807572 | orchestrator | Wednesday 17 September 2025 00:44:44 +0000 (0:00:00.344) 0:00:11.089 *** 2025-09-17 00:46:55.807582 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:46:55.807593 | orchestrator | 2025-09-17 00:46:55.807603 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-17 00:46:55.807614 | orchestrator | Wednesday 17 September 2025 00:44:45 +0000 (0:00:00.338) 0:00:11.428 *** 2025-09-17 00:46:55.807630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 00:46:55.807654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 00:46:55.807668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 00:46:55.807680 | orchestrator | 2025-09-17 00:46:55.807691 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-17 00:46:55.807702 | orchestrator | Wednesday 17 September 2025 00:44:46 +0000 (0:00:01.597) 0:00:13.025 *** 2025-09-17 00:46:55.807800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 00:46:55.807829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 00:46:55.807850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 00:46:55.807862 | orchestrator | 2025-09-17 00:46:55.807873 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 
2025-09-17 00:46:55.807883 | orchestrator | Wednesday 17 September 2025 00:44:49 +0000 (0:00:02.408) 0:00:15.434 *** 2025-09-17 00:46:55.807964 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-17 00:46:55.807991 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-17 00:46:55.808003 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-17 00:46:55.808014 | orchestrator | 2025-09-17 00:46:55.808024 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-17 00:46:55.808035 | orchestrator | Wednesday 17 September 2025 00:44:50 +0000 (0:00:01.711) 0:00:17.145 *** 2025-09-17 00:46:55.808046 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-17 00:46:55.808057 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-17 00:46:55.808067 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-17 00:46:55.808092 | orchestrator | 2025-09-17 00:46:55.808114 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-17 00:46:55.808135 | orchestrator | Wednesday 17 September 2025 00:44:53 +0000 (0:00:02.258) 0:00:19.404 *** 2025-09-17 00:46:55.808147 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-17 00:46:55.808157 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-17 00:46:55.808168 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-17 00:46:55.808179 | orchestrator | 2025-09-17 00:46:55.808190 | orchestrator | TASK [rabbitmq : Copying over 
advanced.config] ********************************* 2025-09-17 00:46:55.808200 | orchestrator | Wednesday 17 September 2025 00:44:54 +0000 (0:00:01.770) 0:00:21.176 *** 2025-09-17 00:46:55.808211 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-17 00:46:55.808221 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-17 00:46:55.808240 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-17 00:46:55.808250 | orchestrator | 2025-09-17 00:46:55.808261 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-17 00:46:55.808271 | orchestrator | Wednesday 17 September 2025 00:44:56 +0000 (0:00:01.995) 0:00:23.171 *** 2025-09-17 00:46:55.808287 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-17 00:46:55.808299 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-17 00:46:55.808309 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-17 00:46:55.808320 | orchestrator | 2025-09-17 00:46:55.808331 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-17 00:46:55.808341 | orchestrator | Wednesday 17 September 2025 00:44:58 +0000 (0:00:01.404) 0:00:24.575 *** 2025-09-17 00:46:55.808352 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-17 00:46:55.808362 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-17 00:46:55.808373 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-17 00:46:55.808384 | orchestrator | 2025-09-17 
00:46:55.808395 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-17 00:46:55.808405 | orchestrator | Wednesday 17 September 2025 00:45:00 +0000 (0:00:02.023) 0:00:26.599 *** 2025-09-17 00:46:55.808416 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:46:55.808427 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:46:55.808437 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:46:55.808448 | orchestrator | 2025-09-17 00:46:55.808459 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-17 00:46:55.808470 | orchestrator | Wednesday 17 September 2025 00:45:01 +0000 (0:00:01.371) 0:00:27.971 *** 2025-09-17 00:46:55.808482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 00:46:55.808501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 
'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 00:46:55.808525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 
2025-09-17 00:46:55.808538 | orchestrator | 2025-09-17 00:46:55.808549 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-17 00:46:55.808559 | orchestrator | Wednesday 17 September 2025 00:45:03 +0000 (0:00:01.559) 0:00:29.531 *** 2025-09-17 00:46:55.808570 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:46:55.808581 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:46:55.808591 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:46:55.808602 | orchestrator | 2025-09-17 00:46:55.808613 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-17 00:46:55.808623 | orchestrator | Wednesday 17 September 2025 00:45:03 +0000 (0:00:00.806) 0:00:30.337 *** 2025-09-17 00:46:55.808634 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:46:55.808645 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:46:55.808655 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:46:55.808666 | orchestrator | 2025-09-17 00:46:55.808677 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-17 00:46:55.808687 | orchestrator | Wednesday 17 September 2025 00:45:11 +0000 (0:00:07.302) 0:00:37.639 *** 2025-09-17 00:46:55.808698 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:46:55.808709 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:46:55.808719 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:46:55.808730 | orchestrator | 2025-09-17 00:46:55.808741 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-17 00:46:55.808751 | orchestrator | 2025-09-17 00:46:55.808762 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-17 00:46:55.808772 | orchestrator | Wednesday 17 September 2025 00:45:11 +0000 (0:00:00.583) 0:00:38.223 *** 2025-09-17 00:46:55.808783 | orchestrator | ok: 
[testbed-node-0] 2025-09-17 00:46:55.808794 | orchestrator | 2025-09-17 00:46:55.808805 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-17 00:46:55.808815 | orchestrator | Wednesday 17 September 2025 00:45:12 +0000 (0:00:00.640) 0:00:38.864 *** 2025-09-17 00:46:55.808826 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:46:55.808837 | orchestrator | 2025-09-17 00:46:55.808847 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-17 00:46:55.808858 | orchestrator | Wednesday 17 September 2025 00:45:12 +0000 (0:00:00.216) 0:00:39.080 *** 2025-09-17 00:46:55.808869 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:46:55.808879 | orchestrator | 2025-09-17 00:46:55.808945 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-17 00:46:55.808959 | orchestrator | Wednesday 17 September 2025 00:45:19 +0000 (0:00:07.306) 0:00:46.387 *** 2025-09-17 00:46:55.808971 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:46:55.808981 | orchestrator | 2025-09-17 00:46:55.808992 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-17 00:46:55.809011 | orchestrator | 2025-09-17 00:46:55.809021 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-17 00:46:55.809032 | orchestrator | Wednesday 17 September 2025 00:46:12 +0000 (0:00:52.192) 0:01:38.579 *** 2025-09-17 00:46:55.809043 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:46:55.809053 | orchestrator | 2025-09-17 00:46:55.809064 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-17 00:46:55.809074 | orchestrator | Wednesday 17 September 2025 00:46:12 +0000 (0:00:00.595) 0:01:39.174 *** 2025-09-17 00:46:55.809085 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:46:55.809096 | 
orchestrator | 2025-09-17 00:46:55.809107 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-17 00:46:55.809117 | orchestrator | Wednesday 17 September 2025 00:46:13 +0000 (0:00:00.238) 0:01:39.413 *** 2025-09-17 00:46:55.809128 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:46:55.809138 | orchestrator | 2025-09-17 00:46:55.809149 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-17 00:46:55.809160 | orchestrator | Wednesday 17 September 2025 00:46:20 +0000 (0:00:07.207) 0:01:46.621 *** 2025-09-17 00:46:55.809170 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:46:55.809181 | orchestrator | 2025-09-17 00:46:55.809191 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-17 00:46:55.809202 | orchestrator | 2025-09-17 00:46:55.809213 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-17 00:46:55.809223 | orchestrator | Wednesday 17 September 2025 00:46:31 +0000 (0:00:11.668) 0:01:58.290 *** 2025-09-17 00:46:55.809234 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:46:55.809245 | orchestrator | 2025-09-17 00:46:55.809263 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-17 00:46:55.809274 | orchestrator | Wednesday 17 September 2025 00:46:32 +0000 (0:00:00.638) 0:01:58.929 *** 2025-09-17 00:46:55.809285 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:46:55.809296 | orchestrator | 2025-09-17 00:46:55.809306 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-17 00:46:55.809317 | orchestrator | Wednesday 17 September 2025 00:46:32 +0000 (0:00:00.262) 0:01:59.192 *** 2025-09-17 00:46:55.809328 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:46:55.809338 | orchestrator | 2025-09-17 00:46:55.809349 | 
orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-17 00:46:55.809359 | orchestrator | Wednesday 17 September 2025 00:46:34 +0000 (0:00:01.678) 0:02:00.870 *** 2025-09-17 00:46:55.809370 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:46:55.809381 | orchestrator | 2025-09-17 00:46:55.809392 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-17 00:46:55.809403 | orchestrator | 2025-09-17 00:46:55.809414 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-17 00:46:55.809424 | orchestrator | Wednesday 17 September 2025 00:46:50 +0000 (0:00:16.171) 0:02:17.041 *** 2025-09-17 00:46:55.809435 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:46:55.809446 | orchestrator | 2025-09-17 00:46:55.809456 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-17 00:46:55.809472 | orchestrator | Wednesday 17 September 2025 00:46:51 +0000 (0:00:00.520) 0:02:17.562 *** 2025-09-17 00:46:55.809483 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-17 00:46:55.809492 | orchestrator | enable_outward_rabbitmq_True 2025-09-17 00:46:55.809502 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-17 00:46:55.809512 | orchestrator | outward_rabbitmq_restart 2025-09-17 00:46:55.809521 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:46:55.809531 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:46:55.809540 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:46:55.809550 | orchestrator | 2025-09-17 00:46:55.809560 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-17 00:46:55.809576 | orchestrator | skipping: no hosts matched 2025-09-17 00:46:55.809586 | orchestrator | 2025-09-17 00:46:55.809596 | 
orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-09-17 00:46:55.809605 | orchestrator | skipping: no hosts matched
2025-09-17 00:46:55.809615 | orchestrator |
2025-09-17 00:46:55.809625 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-09-17 00:46:55.809634 | orchestrator | skipping: no hosts matched
2025-09-17 00:46:55.809644 | orchestrator |
2025-09-17 00:46:55.809653 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:46:55.809663 | orchestrator | localhost      : ok=3  changed=0  unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-09-17 00:46:55.809674 | orchestrator | testbed-node-0 : ok=23 changed=14 unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-17 00:46:55.809684 | orchestrator | testbed-node-1 : ok=21 changed=14 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:46:55.809693 | orchestrator | testbed-node-2 : ok=21 changed=14 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 00:46:55.809703 | orchestrator |
2025-09-17 00:46:55.809713 | orchestrator |
2025-09-17 00:46:55.809722 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:46:55.809732 | orchestrator | Wednesday 17 September 2025 00:46:53 +0000 (0:00:02.742) 0:02:20.304 ***
2025-09-17 00:46:55.809741 | orchestrator | ===============================================================================
2025-09-17 00:46:55.809751 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.03s
2025-09-17 00:46:55.809761 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 16.19s
2025-09-17 00:46:55.809771 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.30s
2025-09-17 00:46:55.809780 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.88s
2025-09-17 00:46:55.809790 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.74s
2025-09-17 00:46:55.809799 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.41s
2025-09-17 00:46:55.809809 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.26s
2025-09-17 00:46:55.809819 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.02s
2025-09-17 00:46:55.809828 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.00s
2025-09-17 00:46:55.809838 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.88s
2025-09-17 00:46:55.809848 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.77s
2025-09-17 00:46:55.809857 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.71s
2025-09-17 00:46:55.809867 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.68s
2025-09-17 00:46:55.809877 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.60s
2025-09-17 00:46:55.809886 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.56s
2025-09-17 00:46:55.809911 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.40s
2025-09-17 00:46:55.809921 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.37s
2025-09-17 00:46:55.809936 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.04s
2025-09-17 00:46:55.809946 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.90s
2025-09-17 00:46:55.809956 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 0.89s
2025-09-17 00:46:55.809966 | orchestrator | 2025-09-17 00:46:55 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED
2025-09-17 00:46:55.810103 | orchestrator | 2025-09-17 00:46:55 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:46:55.810119 | orchestrator | 2025-09-17 00:46:55 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:46:58.850521 | orchestrator | 2025-09-17 00:46:58 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:46:58.851214 | orchestrator | 2025-09-17 00:46:58 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED
2025-09-17 00:46:58.851965 | orchestrator | 2025-09-17 00:46:58 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:46:58.852007 | orchestrator | 2025-09-17 00:46:58 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:47:01.896155 | orchestrator | 2025-09-17 00:47:01 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:47:01.898195 | orchestrator | 2025-09-17 00:47:01 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED
2025-09-17 00:47:01.900162 | orchestrator | 2025-09-17 00:47:01 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:47:01.900189 | orchestrator | 2025-09-17 00:47:01 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:47:04.949567 | orchestrator | 2025-09-17 00:47:04 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:47:04.950734 | orchestrator | 2025-09-17 00:47:04 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state STARTED
2025-09-17 00:47:04.953247 | orchestrator | 2025-09-17 00:47:04 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED
2025-09-17 00:47:04.953641 | orchestrator | 2025-09-17 00:47:04 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:47:47.575755 | orchestrator | 2025-09-17 00:47:47 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:47:47.581103 | orchestrator |
2025-09-17 00:47:47.581152 | orchestrator |
2025-09-17 00:47:47.581166 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 00:47:47.581178 | orchestrator |
2025-09-17 00:47:47.581189 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 00:47:47.581200 | orchestrator | Wednesday 17 September 2025 00:45:12 +0000 (0:00:00.188) 0:00:00.188 ***
2025-09-17 00:47:47.581211 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:47:47.581223 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:47:47.581234 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:47:47.581244 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:47:47.581255 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:47:47.581266 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:47:47.581276 | orchestrator |
2025-09-17 00:47:47.581287 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 00:47:47.581298 |
orchestrator | Wednesday 17 September 2025 00:45:12 +0000 (0:00:00.582) 0:00:00.770 *** 2025-09-17 00:47:47.581308 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-17 00:47:47.581320 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-17 00:47:47.581330 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-17 00:47:47.581341 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-17 00:47:47.581351 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-17 00:47:47.581362 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-17 00:47:47.581373 | orchestrator | 2025-09-17 00:47:47.581384 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-17 00:47:47.581394 | orchestrator | 2025-09-17 00:47:47.581405 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-17 00:47:47.581415 | orchestrator | Wednesday 17 September 2025 00:45:13 +0000 (0:00:01.318) 0:00:02.089 *** 2025-09-17 00:47:47.581476 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:47:47.581490 | orchestrator | 2025-09-17 00:47:47.581502 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-17 00:47:47.581513 | orchestrator | Wednesday 17 September 2025 00:45:15 +0000 (0:00:01.126) 0:00:03.215 *** 2025-09-17 00:47:47.581526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581616 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581627 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 
'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581649 | orchestrator | 2025-09-17 00:47:47.581674 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-17 00:47:47.581688 | orchestrator | Wednesday 17 September 2025 00:45:16 +0000 (0:00:01.685) 0:00:04.900 *** 2025-09-17 00:47:47.581701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581764 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581776 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581789 | orchestrator | 2025-09-17 00:47:47.581801 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-17 00:47:47.581814 | orchestrator | Wednesday 17 September 2025 00:45:19 +0000 (0:00:02.454) 0:00:07.355 *** 2025-09-17 00:47:47.581827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581875 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581921 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.581959 | orchestrator | 2025-09-17 00:47:47.581972 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-17 00:47:47.581985 | orchestrator | Wednesday 17 September 2025 00:45:20 +0000 (0:00:01.265) 0:00:08.620 *** 2025-09-17 00:47:47.581998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582154 | orchestrator | 2025-09-17 00:47:47.582171 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-17 00:47:47.582183 | orchestrator | Wednesday 17 September 2025 00:45:22 +0000 (0:00:02.414) 0:00:11.035 *** 2025-09-17 00:47:47.582194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582240 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582262 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 00:47:47.582273 | orchestrator | 2025-09-17 00:47:47.582284 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-17 00:47:47.582295 | orchestrator | Wednesday 17 September 2025 00:45:24 +0000 (0:00:02.063) 0:00:13.099 *** 2025-09-17 00:47:47.582306 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:47:47.582317 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:47:47.582328 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:47:47.582339 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:47:47.582349 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:47:47.582360 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:47:47.582371 | orchestrator | 2025-09-17 00:47:47.582381 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-17 00:47:47.582392 | orchestrator | Wednesday 17 September 2025 00:45:27 +0000 (0:00:02.791) 0:00:15.890 *** 2025-09-17 00:47:47.582403 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-17 00:47:47.582414 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-17 00:47:47.582424 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-17 00:47:47.582435 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-17 00:47:47.582445 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-17 00:47:47.582456 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-17 00:47:47.582466 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 00:47:47.582477 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 00:47:47.582493 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 00:47:47.582511 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 00:47:47.582521 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 00:47:47.582532 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 00:47:47.582543 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 00:47:47.582555 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 00:47:47.582566 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 00:47:47.582577 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 00:47:47.582588 | orchestrator | changed: [testbed-node-4] 
=> (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 00:47:47.582599 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 00:47:47.582613 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 00:47:47.582625 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 00:47:47.582636 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 00:47:47.582646 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 00:47:47.582657 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 00:47:47.582667 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 00:47:47.582678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 00:47:47.582689 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 00:47:47.582699 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 00:47:47.582710 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 00:47:47.582721 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 00:47:47.582731 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 00:47:47.582742 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 00:47:47.582753 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 00:47:47.582788 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 00:47:47.582799 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 00:47:47.582810 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 00:47:47.582820 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 00:47:47.582831 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-17 00:47:47.582842 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-17 00:47:47.582859 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-17 00:47:47.582870 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-17 00:47:47.582880 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-17 00:47:47.582941 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-17 00:47:47.582953 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-17 00:47:47.582965 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-17 00:47:47.582983 | orchestrator | 
ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-17 00:47:47.582994 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-17 00:47:47.583005 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-17 00:47:47.583016 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-17 00:47:47.583027 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-17 00:47:47.583037 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-17 00:47:47.583049 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-17 00:47:47.583059 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-17 00:47:47.583070 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-17 00:47:47.583081 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-17 00:47:47.583097 | orchestrator | 2025-09-17 00:47:47.583108 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-17 00:47:47.583119 | orchestrator | Wednesday 17 September 2025 00:45:47 +0000 (0:00:19.416) 0:00:35.307 *** 2025-09-17 00:47:47.583130 | orchestrator | 2025-09-17 00:47:47.583141 | orchestrator | TASK 
[ovn-controller : Flush handlers] ***************************************** 2025-09-17 00:47:47.583151 | orchestrator | Wednesday 17 September 2025 00:45:47 +0000 (0:00:00.177) 0:00:35.485 *** 2025-09-17 00:47:47.583162 | orchestrator | 2025-09-17 00:47:47.583173 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-17 00:47:47.583184 | orchestrator | Wednesday 17 September 2025 00:45:47 +0000 (0:00:00.068) 0:00:35.553 *** 2025-09-17 00:47:47.583194 | orchestrator | 2025-09-17 00:47:47.583205 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-17 00:47:47.583216 | orchestrator | Wednesday 17 September 2025 00:45:47 +0000 (0:00:00.060) 0:00:35.613 *** 2025-09-17 00:47:47.583226 | orchestrator | 2025-09-17 00:47:47.583237 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-17 00:47:47.583248 | orchestrator | Wednesday 17 September 2025 00:45:47 +0000 (0:00:00.059) 0:00:35.673 *** 2025-09-17 00:47:47.583258 | orchestrator | 2025-09-17 00:47:47.583269 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-17 00:47:47.583280 | orchestrator | Wednesday 17 September 2025 00:45:47 +0000 (0:00:00.059) 0:00:35.732 *** 2025-09-17 00:47:47.583290 | orchestrator | 2025-09-17 00:47:47.583301 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-17 00:47:47.583318 | orchestrator | Wednesday 17 September 2025 00:45:47 +0000 (0:00:00.059) 0:00:35.792 *** 2025-09-17 00:47:47.583329 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:47:47.583340 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:47:47.583350 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:47:47.583361 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:47:47.583372 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:47:47.583382 | orchestrator | ok: 
[testbed-node-5] 2025-09-17 00:47:47.583393 | orchestrator | 2025-09-17 00:47:47.583403 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-17 00:47:47.583414 | orchestrator | Wednesday 17 September 2025 00:45:49 +0000 (0:00:01.523) 0:00:37.315 *** 2025-09-17 00:47:47.583425 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:47:47.583436 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:47:47.583446 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:47:47.583457 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:47:47.583467 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:47:47.583478 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:47:47.583489 | orchestrator | 2025-09-17 00:47:47.583499 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-17 00:47:47.583510 | orchestrator | 2025-09-17 00:47:47.583521 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-17 00:47:47.583531 | orchestrator | Wednesday 17 September 2025 00:46:25 +0000 (0:00:35.836) 0:01:13.152 *** 2025-09-17 00:47:47.583542 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:47:47.583553 | orchestrator | 2025-09-17 00:47:47.583563 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-17 00:47:47.583574 | orchestrator | Wednesday 17 September 2025 00:46:25 +0000 (0:00:00.698) 0:01:13.850 *** 2025-09-17 00:47:47.583585 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:47:47.583595 | orchestrator | 2025-09-17 00:47:47.583606 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-17 00:47:47.583617 | orchestrator | Wednesday 17 September 
2025 00:46:26 +0000 (0:00:00.514) 0:01:14.364 *** 2025-09-17 00:47:47.583627 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:47:47.583638 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:47:47.583649 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:47:47.583659 | orchestrator | 2025-09-17 00:47:47.583670 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-17 00:47:47.583681 | orchestrator | Wednesday 17 September 2025 00:46:27 +0000 (0:00:00.982) 0:01:15.347 *** 2025-09-17 00:47:47.583692 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:47:47.583702 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:47:47.583713 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:47:47.583729 | orchestrator | 2025-09-17 00:47:47.583740 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-17 00:47:47.583751 | orchestrator | Wednesday 17 September 2025 00:46:27 +0000 (0:00:00.366) 0:01:15.714 *** 2025-09-17 00:47:47.583762 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:47:47.583773 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:47:47.583783 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:47:47.583794 | orchestrator | 2025-09-17 00:47:47.583805 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-17 00:47:47.583816 | orchestrator | Wednesday 17 September 2025 00:46:28 +0000 (0:00:00.427) 0:01:16.142 *** 2025-09-17 00:47:47.583826 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:47:47.583837 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:47:47.583848 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:47:47.583858 | orchestrator | 2025-09-17 00:47:47.583869 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-17 00:47:47.583880 | orchestrator | Wednesday 17 September 2025 00:46:28 +0000 (0:00:00.394) 0:01:16.536 *** 2025-09-17 
00:47:47.583921 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:47:47.583932 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:47:47.583942 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:47:47.583953 | orchestrator | 2025-09-17 00:47:47.583964 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-17 00:47:47.583975 | orchestrator | Wednesday 17 September 2025 00:46:28 +0000 (0:00:00.548) 0:01:17.084 *** 2025-09-17 00:47:47.583985 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.583996 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584006 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584017 | orchestrator | 2025-09-17 00:47:47.584027 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-17 00:47:47.584038 | orchestrator | Wednesday 17 September 2025 00:46:29 +0000 (0:00:00.309) 0:01:17.393 *** 2025-09-17 00:47:47.584053 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584063 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584074 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584084 | orchestrator | 2025-09-17 00:47:47.584095 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-17 00:47:47.584106 | orchestrator | Wednesday 17 September 2025 00:46:29 +0000 (0:00:00.328) 0:01:17.722 *** 2025-09-17 00:47:47.584116 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584127 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584138 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584148 | orchestrator | 2025-09-17 00:47:47.584159 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-17 00:47:47.584170 | orchestrator | Wednesday 17 September 2025 00:46:29 +0000 (0:00:00.320) 0:01:18.042 *** 2025-09-17 00:47:47.584180 | 
orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584191 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584201 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584212 | orchestrator | 2025-09-17 00:47:47.584222 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-17 00:47:47.584233 | orchestrator | Wednesday 17 September 2025 00:46:30 +0000 (0:00:00.659) 0:01:18.701 *** 2025-09-17 00:47:47.584244 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584254 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584265 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584275 | orchestrator | 2025-09-17 00:47:47.584286 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-17 00:47:47.584296 | orchestrator | Wednesday 17 September 2025 00:46:30 +0000 (0:00:00.317) 0:01:19.018 *** 2025-09-17 00:47:47.584307 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584318 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584328 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584339 | orchestrator | 2025-09-17 00:47:47.584349 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-17 00:47:47.584360 | orchestrator | Wednesday 17 September 2025 00:46:31 +0000 (0:00:00.321) 0:01:19.340 *** 2025-09-17 00:47:47.584371 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584381 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584392 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584402 | orchestrator | 2025-09-17 00:47:47.584413 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-17 00:47:47.584423 | orchestrator | Wednesday 17 September 2025 00:46:31 +0000 (0:00:00.316) 0:01:19.656 *** 2025-09-17 00:47:47.584434 | 
orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584444 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584455 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584466 | orchestrator | 2025-09-17 00:47:47.584476 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-17 00:47:47.584487 | orchestrator | Wednesday 17 September 2025 00:46:31 +0000 (0:00:00.326) 0:01:19.983 *** 2025-09-17 00:47:47.584497 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584513 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584524 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584535 | orchestrator | 2025-09-17 00:47:47.584546 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-17 00:47:47.584556 | orchestrator | Wednesday 17 September 2025 00:46:32 +0000 (0:00:00.490) 0:01:20.473 *** 2025-09-17 00:47:47.584567 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584578 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584588 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584599 | orchestrator | 2025-09-17 00:47:47.584610 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-17 00:47:47.584620 | orchestrator | Wednesday 17 September 2025 00:46:32 +0000 (0:00:00.323) 0:01:20.797 *** 2025-09-17 00:47:47.584631 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584642 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584652 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584663 | orchestrator | 2025-09-17 00:47:47.584674 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-17 00:47:47.584684 | orchestrator | Wednesday 17 September 2025 00:46:32 +0000 (0:00:00.286) 0:01:21.083 *** 2025-09-17 00:47:47.584695 | 
orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584706 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.584722 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584733 | orchestrator | 2025-09-17 00:47:47.584744 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-17 00:47:47.584755 | orchestrator | Wednesday 17 September 2025 00:46:33 +0000 (0:00:00.306) 0:01:21.390 *** 2025-09-17 00:47:47.584766 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:47:47.584777 | orchestrator | 2025-09-17 00:47:47.584787 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-17 00:47:47.584798 | orchestrator | Wednesday 17 September 2025 00:46:33 +0000 (0:00:00.715) 0:01:22.105 *** 2025-09-17 00:47:47.584809 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:47:47.584819 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:47:47.584830 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:47:47.584841 | orchestrator | 2025-09-17 00:47:47.584851 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-17 00:47:47.584862 | orchestrator | Wednesday 17 September 2025 00:46:34 +0000 (0:00:00.479) 0:01:22.585 *** 2025-09-17 00:47:47.584873 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:47:47.584883 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:47:47.584912 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:47:47.584923 | orchestrator | 2025-09-17 00:47:47.584934 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-17 00:47:47.584945 | orchestrator | Wednesday 17 September 2025 00:46:34 +0000 (0:00:00.427) 0:01:23.012 *** 2025-09-17 00:47:47.584955 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.584966 | orchestrator | skipping: 
[testbed-node-1] 2025-09-17 00:47:47.584977 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.584987 | orchestrator | 2025-09-17 00:47:47.584998 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-17 00:47:47.585013 | orchestrator | Wednesday 17 September 2025 00:46:35 +0000 (0:00:00.511) 0:01:23.524 *** 2025-09-17 00:47:47.585024 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.585035 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.585045 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.585056 | orchestrator | 2025-09-17 00:47:47.585067 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-17 00:47:47.585078 | orchestrator | Wednesday 17 September 2025 00:46:35 +0000 (0:00:00.358) 0:01:23.882 *** 2025-09-17 00:47:47.585089 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.585099 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.585116 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.585127 | orchestrator | 2025-09-17 00:47:47.585138 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-17 00:47:47.585148 | orchestrator | Wednesday 17 September 2025 00:46:36 +0000 (0:00:00.339) 0:01:24.221 *** 2025-09-17 00:47:47.585159 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.585170 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:47:47.585180 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:47:47.585191 | orchestrator | 2025-09-17 00:47:47.585202 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-17 00:47:47.585212 | orchestrator | Wednesday 17 September 2025 00:46:36 +0000 (0:00:00.485) 0:01:24.707 *** 2025-09-17 00:47:47.585223 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:47:47.585233 | orchestrator 
| skipping: [testbed-node-1]
2025-09-17 00:47:47.585244 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:47:47.585255 | orchestrator |
2025-09-17 00:47:47.585266 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-09-17 00:47:47.585276 | orchestrator | Wednesday 17 September 2025 00:46:37 +0000 (0:00:00.715) 0:01:25.423 ***
2025-09-17 00:47:47.585287 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:47:47.585298 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:47:47.585308 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:47:47.585319 | orchestrator |
2025-09-17 00:47:47.585329 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-17 00:47:47.585340 | orchestrator | Wednesday 17 September 2025 00:46:37 +0000 (0:00:00.472) 0:01:25.896 ***
2025-09-17 00:47:47.585351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585469 | orchestrator |
2025-09-17 00:47:47.585480 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-17 00:47:47.585516 | orchestrator | Wednesday 17 September 2025 00:46:39 +0000 (0:00:01.457) 0:01:27.353 ***
2025-09-17 00:47:47.585529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image':
'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585647 | orchestrator |
2025-09-17 00:47:47.585658 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-17 00:47:47.585668 | orchestrator | Wednesday 17 September 2025 00:46:43 +0000 (0:00:04.113) 0:01:31.467 ***
2025-09-17 00:47:47.585680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.585796 | orchestrator |
2025-09-17 00:47:47.585807 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-17 00:47:47.585818 | orchestrator | Wednesday 17 September 2025 00:46:45 +0000 (0:00:02.331) 0:01:33.798 ***
2025-09-17 00:47:47.585829 | orchestrator |
2025-09-17 00:47:47.585839 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-17 00:47:47.585850 | orchestrator | Wednesday 17 September 2025 00:46:45 +0000 (0:00:00.064) 0:01:33.862 ***
2025-09-17 00:47:47.585861 | orchestrator |
2025-09-17 00:47:47.585871 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-17 00:47:47.585882 | orchestrator | Wednesday 17 September 2025 00:46:45 +0000 (0:00:00.059) 0:01:33.922 ***
2025-09-17 00:47:47.585970 | orchestrator |
2025-09-17 00:47:47.585985 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-17 00:47:47.585995 | orchestrator | Wednesday 17 September 2025 00:46:45 +0000 (0:00:00.068) 0:01:33.991 ***
2025-09-17 00:47:47.586006 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:47:47.586048 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:47:47.586061 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:47:47.586072 | orchestrator |
2025-09-17 00:47:47.586082 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-17 00:47:47.586093 | orchestrator | Wednesday 17 September 2025 00:46:54 +0000 (0:00:08.246) 0:01:42.238 ***
2025-09-17 00:47:47.586104 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:47:47.586114 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:47:47.586125 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:47:47.586135 | orchestrator |
2025-09-17 00:47:47.586146 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-17 00:47:47.586157 | orchestrator | Wednesday 17 September 2025 00:47:01 +0000 (0:00:07.028) 0:01:49.267 ***
2025-09-17 00:47:47.586167 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:47:47.586178 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:47:47.586189 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:47:47.586199 | orchestrator |
2025-09-17 00:47:47.586210 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-17 00:47:47.586220 | orchestrator | Wednesday 17 September 2025 00:47:07 +0000 (0:00:06.697) 0:01:55.965 ***
2025-09-17 00:47:47.586231 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:47:47.586241 | orchestrator |
2025-09-17 00:47:47.586252 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-17 00:47:47.586263 | orchestrator | Wednesday 17 September 2025 00:47:08 +0000 (0:00:00.291) 0:01:56.256 ***
2025-09-17 00:47:47.586273 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:47:47.586284 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:47:47.586294 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:47:47.586313 | orchestrator |
2025-09-17 00:47:47.586324 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-17 00:47:47.586335 | orchestrator | Wednesday 17 September 2025 00:47:08 +0000 (0:00:00.786) 0:01:57.042 ***
2025-09-17 00:47:47.586345 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:47:47.586356 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:47:47.586366 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:47:47.586377 | orchestrator |
2025-09-17 00:47:47.586388 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-17 00:47:47.586398 | orchestrator | Wednesday 17 September 2025 00:47:09 +0000 (0:00:00.622) 0:01:57.665 ***
2025-09-17 00:47:47.586409 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:47:47.586419 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:47:47.586429 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:47:47.586438 | orchestrator |
2025-09-17 00:47:47.586448 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-17 00:47:47.586457 | orchestrator | Wednesday 17 September 2025 00:47:10 +0000 (0:00:00.841) 0:01:58.507 ***
2025-09-17 00:47:47.586466 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:47:47.586476 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:47:47.586485 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:47:47.586494 | orchestrator |
2025-09-17 00:47:47.586504 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-17 00:47:47.586513 | orchestrator | Wednesday 17 September 2025 00:47:11 +0000 (0:00:00.733) 0:01:59.240 ***
2025-09-17 00:47:47.586523 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:47:47.586532 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:47:47.586549 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:47:47.586559 | orchestrator |
2025-09-17 00:47:47.586568 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-17 00:47:47.586578 | orchestrator | Wednesday 17 September 2025 00:47:12 +0000 (0:00:01.307) 0:02:00.548 ***
2025-09-17 00:47:47.586588 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:47:47.586597 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:47:47.586606 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:47:47.586616 | orchestrator |
2025-09-17 00:47:47.586625 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-09-17 00:47:47.586635 | orchestrator | Wednesday 17 September 2025 00:47:13 +0000 (0:00:00.273) 0:02:01.298 ***
2025-09-17 00:47:47.586644 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:47:47.586654 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:47:47.586663 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:47:47.586672 | orchestrator |
2025-09-17 00:47:47.586682 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-17 00:47:47.586692 | orchestrator | Wednesday 17 September 2025 00:47:13 +0000 (0:00:00.273) 0:02:01.572 ***
2025-09-17 00:47:47.586702 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586716 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586727 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586737 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586753 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586763 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586773 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586783 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586803 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586813 | orchestrator |
2025-09-17 00:47:47.586823 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-17 00:47:47.586832 | orchestrator | Wednesday 17 September 2025 00:47:14 +0000 (0:00:01.518) 0:02:03.090 ***
2025-09-17 00:47:47.586842 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586852 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value':
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586866 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586916 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586937 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.586966 | orchestrator |
2025-09-17 00:47:47.586976 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-17 00:47:47.586985 | orchestrator | Wednesday 17 September 2025 00:47:19 +0000 (0:00:04.068) 0:02:07.158 ***
2025-09-17 00:47:47.587000 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.587024 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.587034 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.587058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.587073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.587083 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.587093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.587103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.587113 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 00:47:47.587123 | orchestrator |
2025-09-17 00:47:47.587133 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-17 00:47:47.587143 | orchestrator | Wednesday 17 September 2025 00:47:21 +0000 (0:00:02.951) 0:02:10.110 ***
2025-09-17 00:47:47.587153 | orchestrator |
2025-09-17 00:47:47.587162 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-17 00:47:47.587172 | orchestrator | Wednesday 17 September 2025 00:47:22 +0000 (0:00:00.065)
0:02:10.175 ***
2025-09-17 00:47:47.587181 | orchestrator |
2025-09-17 00:47:47.587191 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-17 00:47:47.587200 | orchestrator | Wednesday 17 September 2025 00:47:22 +0000 (0:00:00.062) 0:02:10.238 ***
2025-09-17 00:47:47.587210 | orchestrator |
2025-09-17 00:47:47.587219 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-17 00:47:47.587229 | orchestrator | Wednesday 17 September 2025 00:47:22 +0000 (0:00:00.059) 0:02:10.297 ***
2025-09-17 00:47:47.587238 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:47:47.587248 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:47:47.587258 | orchestrator |
2025-09-17 00:47:47.587272 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-17 00:47:47.587282 | orchestrator | Wednesday 17 September 2025 00:47:28 +0000 (0:00:06.500) 0:02:16.798 ***
2025-09-17 00:47:47.587291 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:47:47.587301 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:47:47.587310 | orchestrator |
2025-09-17 00:47:47.587320 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-17 00:47:47.587329 | orchestrator | Wednesday 17 September 2025 00:47:34 +0000 (0:00:06.198) 0:02:22.996 ***
2025-09-17 00:47:47.587344 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:47:47.587354 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:47:47.587363 | orchestrator |
2025-09-17 00:47:47.587373 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-17 00:47:47.587382 | orchestrator | Wednesday 17 September 2025 00:47:41 +0000 (0:00:06.375) 0:02:29.372 ***
2025-09-17 00:47:47.587392 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:47:47.587401 | orchestrator |
2025-09-17 00:47:47.587411 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-17 00:47:47.587420 | orchestrator | Wednesday 17 September 2025 00:47:41 +0000 (0:00:00.136) 0:02:29.508 ***
2025-09-17 00:47:47.587429 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:47:47.587439 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:47:47.587449 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:47:47.587458 | orchestrator |
2025-09-17 00:47:47.587467 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-17 00:47:47.587477 | orchestrator | Wednesday 17 September 2025 00:47:42 +0000 (0:00:00.786) 0:02:30.295 ***
2025-09-17 00:47:47.587486 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:47:47.587496 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:47:47.587505 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:47:47.587515 | orchestrator |
2025-09-17 00:47:47.587528 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-17 00:47:47.587538 | orchestrator | Wednesday 17 September 2025 00:47:42 +0000 (0:00:00.713) 0:02:31.008 ***
2025-09-17 00:47:47.587548 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:47:47.587557 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:47:47.587567 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:47:47.587576 | orchestrator |
2025-09-17 00:47:47.587586 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-17 00:47:47.587595 | orchestrator | Wednesday 17 September 2025 00:47:43 +0000 (0:00:00.809) 0:02:31.818 ***
2025-09-17 00:47:47.587605 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:47:47.587614 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:47:47.587624 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:47:47.587633 | orchestrator |
2025-09-17 00:47:47.587643 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-17 00:47:47.587652 | orchestrator | Wednesday 17 September 2025 00:47:44 +0000 (0:00:00.671) 0:02:32.489 ***
2025-09-17 00:47:47.587662 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:47:47.587672 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:47:47.587681 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:47:47.587691 | orchestrator |
2025-09-17 00:47:47.587700 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-17 00:47:47.587710 | orchestrator | Wednesday 17 September 2025 00:47:45 +0000 (0:00:00.829) 0:02:33.319 ***
2025-09-17 00:47:47.587719 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:47:47.587729 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:47:47.587738 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:47:47.587748 | orchestrator |
2025-09-17 00:47:47.587757 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:47:47.587767 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-17 00:47:47.587776 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-17 00:47:47.587786 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-17 00:47:47.587796 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:47:47.587806 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:47:47.587820 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:47:47.587830 | orchestrator |
2025-09-17 00:47:47.587840 | orchestrator |
2025-09-17 00:47:47.587849 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:47:47.587859 | orchestrator | Wednesday 17 September 2025 00:47:46 +0000 (0:00:00.848) 0:02:34.167 ***
2025-09-17 00:47:47.587868 | orchestrator | ===============================================================================
2025-09-17 00:47:47.587878 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.84s
2025-09-17 00:47:47.587887 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.42s
2025-09-17 00:47:47.587912 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.75s
2025-09-17 00:47:47.587922 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.23s
2025-09-17 00:47:47.587931 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.07s
2025-09-17 00:47:47.587941 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.11s
2025-09-17 00:47:47.587950 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.07s
2025-09-17 00:47:47.587964 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.95s
2025-09-17 00:47:47.587975 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.79s
2025-09-17 00:47:47.587984 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.45s
2025-09-17 00:47:47.587994 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.41s
2025-09-17 00:47:47.588003 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.33s
2025-09-17 00:47:47.588013 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.06s
2025-09-17 00:47:47.588022 | orchestrator | ovn-controller : Ensuring config
directories exist ---------------------- 1.69s 2025-09-17 00:47:47.588032 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.52s 2025-09-17 00:47:47.588041 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.52s 2025-09-17 00:47:47.588050 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.46s 2025-09-17 00:47:47.588060 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.32s 2025-09-17 00:47:47.588069 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.31s 2025-09-17 00:47:47.588079 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.27s 2025-09-17 00:47:47.588088 | orchestrator | 2025-09-17 00:47:47 | INFO  | Task 244139c3-8723-48d0-af94-31feb89ffbec is in state SUCCESS 2025-09-17 00:47:47.588102 | orchestrator | 2025-09-17 00:47:47 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:47:47.588112 | orchestrator | 2025-09-17 00:47:47 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:47:50.629535 | orchestrator | 2025-09-17 00:47:50 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:47:50.630783 | orchestrator | 2025-09-17 00:47:50 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:47:50.631003 | orchestrator | 2025-09-17 00:47:50 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:47:53.666625 | orchestrator | 2025-09-17 00:47:53 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:47:53.668814 | orchestrator | 2025-09-17 00:47:53 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:47:53.669164 | orchestrator | 2025-09-17 00:47:53 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:47:56.721662 | orchestrator | 2025-09-17 
00:47:56 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:47:56.723838 | orchestrator | 2025-09-17 00:47:56 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:47:56.724289 | orchestrator | 2025-09-17 00:47:56 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:47:59.778318 | orchestrator | 2025-09-17 00:47:59 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:47:59.779573 | orchestrator | 2025-09-17 00:47:59 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:47:59.779830 | orchestrator | 2025-09-17 00:47:59 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:02.822010 | orchestrator | 2025-09-17 00:48:02 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:02.823590 | orchestrator | 2025-09-17 00:48:02 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:02.823617 | orchestrator | 2025-09-17 00:48:02 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:05.868021 | orchestrator | 2025-09-17 00:48:05 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:05.868431 | orchestrator | 2025-09-17 00:48:05 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:05.868874 | orchestrator | 2025-09-17 00:48:05 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:08.903560 | orchestrator | 2025-09-17 00:48:08 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:08.906260 | orchestrator | 2025-09-17 00:48:08 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:08.906389 | orchestrator | 2025-09-17 00:48:08 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:11.951488 | orchestrator | 2025-09-17 00:48:11 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state 
STARTED 2025-09-17 00:48:11.954178 | orchestrator | 2025-09-17 00:48:11 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:11.954211 | orchestrator | 2025-09-17 00:48:11 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:15.004984 | orchestrator | 2025-09-17 00:48:15 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:15.006544 | orchestrator | 2025-09-17 00:48:15 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:15.006882 | orchestrator | 2025-09-17 00:48:15 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:18.056758 | orchestrator | 2025-09-17 00:48:18 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:18.058593 | orchestrator | 2025-09-17 00:48:18 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:18.060470 | orchestrator | 2025-09-17 00:48:18 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:21.105959 | orchestrator | 2025-09-17 00:48:21 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:21.108318 | orchestrator | 2025-09-17 00:48:21 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:21.108651 | orchestrator | 2025-09-17 00:48:21 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:24.145991 | orchestrator | 2025-09-17 00:48:24 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:24.146918 | orchestrator | 2025-09-17 00:48:24 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:24.146990 | orchestrator | 2025-09-17 00:48:24 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:27.178213 | orchestrator | 2025-09-17 00:48:27 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:27.178514 | orchestrator | 2025-09-17 00:48:27 | INFO  
| Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:27.178540 | orchestrator | 2025-09-17 00:48:27 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:30.226954 | orchestrator | 2025-09-17 00:48:30 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:30.229326 | orchestrator | 2025-09-17 00:48:30 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:30.229357 | orchestrator | 2025-09-17 00:48:30 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:33.284863 | orchestrator | 2025-09-17 00:48:33 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:33.288158 | orchestrator | 2025-09-17 00:48:33 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:33.288477 | orchestrator | 2025-09-17 00:48:33 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:36.326442 | orchestrator | 2025-09-17 00:48:36 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:36.326629 | orchestrator | 2025-09-17 00:48:36 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:36.326648 | orchestrator | 2025-09-17 00:48:36 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:39.363368 | orchestrator | 2025-09-17 00:48:39 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:39.364741 | orchestrator | 2025-09-17 00:48:39 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:39.364769 | orchestrator | 2025-09-17 00:48:39 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:42.400183 | orchestrator | 2025-09-17 00:48:42 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:42.402710 | orchestrator | 2025-09-17 00:48:42 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 
00:48:42.402754 | orchestrator | 2025-09-17 00:48:42 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:45.449520 | orchestrator | 2025-09-17 00:48:45 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:45.449628 | orchestrator | 2025-09-17 00:48:45 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:45.449643 | orchestrator | 2025-09-17 00:48:45 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:48.487122 | orchestrator | 2025-09-17 00:48:48 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:48.487727 | orchestrator | 2025-09-17 00:48:48 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:48.487887 | orchestrator | 2025-09-17 00:48:48 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:51.538385 | orchestrator | 2025-09-17 00:48:51 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:51.539674 | orchestrator | 2025-09-17 00:48:51 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:51.539790 | orchestrator | 2025-09-17 00:48:51 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:54.591691 | orchestrator | 2025-09-17 00:48:54 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:54.593136 | orchestrator | 2025-09-17 00:48:54 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:54.593517 | orchestrator | 2025-09-17 00:48:54 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:48:57.642848 | orchestrator | 2025-09-17 00:48:57 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:48:57.644777 | orchestrator | 2025-09-17 00:48:57 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:48:57.644809 | orchestrator | 2025-09-17 00:48:57 | INFO  | Wait 1 second(s) 
until the next check 2025-09-17 00:49:00.687551 | orchestrator | 2025-09-17 00:49:00 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:00.689773 | orchestrator | 2025-09-17 00:49:00 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:00.690374 | orchestrator | 2025-09-17 00:49:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:03.735985 | orchestrator | 2025-09-17 00:49:03 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:03.736203 | orchestrator | 2025-09-17 00:49:03 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:03.736227 | orchestrator | 2025-09-17 00:49:03 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:06.777721 | orchestrator | 2025-09-17 00:49:06 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:06.778243 | orchestrator | 2025-09-17 00:49:06 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:06.778377 | orchestrator | 2025-09-17 00:49:06 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:09.817423 | orchestrator | 2025-09-17 00:49:09 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:09.818287 | orchestrator | 2025-09-17 00:49:09 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:09.818318 | orchestrator | 2025-09-17 00:49:09 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:12.859040 | orchestrator | 2025-09-17 00:49:12 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:12.859154 | orchestrator | 2025-09-17 00:49:12 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:12.859170 | orchestrator | 2025-09-17 00:49:12 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:15.898644 | orchestrator | 2025-09-17 
00:49:15 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:15.898752 | orchestrator | 2025-09-17 00:49:15 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:15.899043 | orchestrator | 2025-09-17 00:49:15 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:18.936747 | orchestrator | 2025-09-17 00:49:18 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:18.937400 | orchestrator | 2025-09-17 00:49:18 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:18.937432 | orchestrator | 2025-09-17 00:49:18 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:21.973571 | orchestrator | 2025-09-17 00:49:21 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:21.975586 | orchestrator | 2025-09-17 00:49:21 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:21.975621 | orchestrator | 2025-09-17 00:49:21 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:25.027473 | orchestrator | 2025-09-17 00:49:25 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:25.028607 | orchestrator | 2025-09-17 00:49:25 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:25.028639 | orchestrator | 2025-09-17 00:49:25 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:28.081486 | orchestrator | 2025-09-17 00:49:28 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:28.084141 | orchestrator | 2025-09-17 00:49:28 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:28.084176 | orchestrator | 2025-09-17 00:49:28 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:31.139138 | orchestrator | 2025-09-17 00:49:31 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state 
STARTED 2025-09-17 00:49:31.139842 | orchestrator | 2025-09-17 00:49:31 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:31.140096 | orchestrator | 2025-09-17 00:49:31 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:34.206295 | orchestrator | 2025-09-17 00:49:34 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:34.206642 | orchestrator | 2025-09-17 00:49:34 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:34.207214 | orchestrator | 2025-09-17 00:49:34 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:37.249780 | orchestrator | 2025-09-17 00:49:37 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:37.260265 | orchestrator | 2025-09-17 00:49:37 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:37.260571 | orchestrator | 2025-09-17 00:49:37 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:40.305944 | orchestrator | 2025-09-17 00:49:40 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:40.309639 | orchestrator | 2025-09-17 00:49:40 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:40.310357 | orchestrator | 2025-09-17 00:49:40 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:43.352515 | orchestrator | 2025-09-17 00:49:43 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:43.353976 | orchestrator | 2025-09-17 00:49:43 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:43.354409 | orchestrator | 2025-09-17 00:49:43 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:46.392834 | orchestrator | 2025-09-17 00:49:46 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:46.394583 | orchestrator | 2025-09-17 00:49:46 | INFO  
| Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:46.394622 | orchestrator | 2025-09-17 00:49:46 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:49.430263 | orchestrator | 2025-09-17 00:49:49 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:49.433061 | orchestrator | 2025-09-17 00:49:49 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:49.433100 | orchestrator | 2025-09-17 00:49:49 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:52.471201 | orchestrator | 2025-09-17 00:49:52 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:52.472544 | orchestrator | 2025-09-17 00:49:52 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:52.472609 | orchestrator | 2025-09-17 00:49:52 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:55.512963 | orchestrator | 2025-09-17 00:49:55 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:55.513172 | orchestrator | 2025-09-17 00:49:55 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:55.513205 | orchestrator | 2025-09-17 00:49:55 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:49:58.556732 | orchestrator | 2025-09-17 00:49:58 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:49:58.558802 | orchestrator | 2025-09-17 00:49:58 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:49:58.559093 | orchestrator | 2025-09-17 00:49:58 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:01.598238 | orchestrator | 2025-09-17 00:50:01 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:01.599217 | orchestrator | 2025-09-17 00:50:01 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 
00:50:01.599858 | orchestrator | 2025-09-17 00:50:01 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:04.653096 | orchestrator | 2025-09-17 00:50:04 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:04.654861 | orchestrator | 2025-09-17 00:50:04 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:50:04.654914 | orchestrator | 2025-09-17 00:50:04 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:07.703660 | orchestrator | 2025-09-17 00:50:07 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:07.703752 | orchestrator | 2025-09-17 00:50:07 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:50:07.703767 | orchestrator | 2025-09-17 00:50:07 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:10.746722 | orchestrator | 2025-09-17 00:50:10 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:10.748058 | orchestrator | 2025-09-17 00:50:10 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:50:10.748211 | orchestrator | 2025-09-17 00:50:10 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:13.782377 | orchestrator | 2025-09-17 00:50:13 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:13.782961 | orchestrator | 2025-09-17 00:50:13 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:50:13.782994 | orchestrator | 2025-09-17 00:50:13 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:16.821471 | orchestrator | 2025-09-17 00:50:16 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:16.823036 | orchestrator | 2025-09-17 00:50:16 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:50:16.823128 | orchestrator | 2025-09-17 00:50:16 | INFO  | Wait 1 second(s) 
until the next check 2025-09-17 00:50:19.870474 | orchestrator | 2025-09-17 00:50:19 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:19.870713 | orchestrator | 2025-09-17 00:50:19 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:50:19.871107 | orchestrator | 2025-09-17 00:50:19 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:22.910829 | orchestrator | 2025-09-17 00:50:22 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:22.911157 | orchestrator | 2025-09-17 00:50:22 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:50:22.911190 | orchestrator | 2025-09-17 00:50:22 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:25.958884 | orchestrator | 2025-09-17 00:50:25 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:25.960637 | orchestrator | 2025-09-17 00:50:25 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state STARTED 2025-09-17 00:50:25.960952 | orchestrator | 2025-09-17 00:50:25 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:29.006445 | orchestrator | 2025-09-17 00:50:29 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:50:29.007624 | orchestrator | 2025-09-17 00:50:29 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:29.017092 | orchestrator | 2025-09-17 00:50:29 | INFO  | Task 101b5619-6778-4a9c-bd8d-f0d9b65f7fc3 is in state SUCCESS 2025-09-17 00:50:29.019193 | orchestrator | 2025-09-17 00:50:29.019265 | orchestrator | 2025-09-17 00:50:29.019279 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 00:50:29.019291 | orchestrator | 2025-09-17 00:50:29.019395 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 00:50:29.019408 | 
orchestrator | Wednesday 17 September 2025 00:44:14 +0000 (0:00:00.292) 0:00:00.292 *** 2025-09-17 00:50:29.019419 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.019432 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.019443 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.019454 | orchestrator | 2025-09-17 00:50:29.019465 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 00:50:29.019477 | orchestrator | Wednesday 17 September 2025 00:44:15 +0000 (0:00:00.378) 0:00:00.671 *** 2025-09-17 00:50:29.019489 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-17 00:50:29.019584 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-17 00:50:29.019597 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-17 00:50:29.019608 | orchestrator | 2025-09-17 00:50:29.019618 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-17 00:50:29.019629 | orchestrator | 2025-09-17 00:50:29.019640 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-17 00:50:29.019651 | orchestrator | Wednesday 17 September 2025 00:44:15 +0000 (0:00:00.488) 0:00:01.159 *** 2025-09-17 00:50:29.019662 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.019673 | orchestrator | 2025-09-17 00:50:29.019684 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-17 00:50:29.019694 | orchestrator | Wednesday 17 September 2025 00:44:16 +0000 (0:00:00.602) 0:00:01.762 *** 2025-09-17 00:50:29.019705 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.019716 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.019727 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.019737 | orchestrator | 
2025-09-17 00:50:29.019748 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-17 00:50:29.019759 | orchestrator | Wednesday 17 September 2025 00:44:17 +0000 (0:00:01.618) 0:00:03.380 *** 2025-09-17 00:50:29.019770 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.019780 | orchestrator | 2025-09-17 00:50:29.019791 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-17 00:50:29.019802 | orchestrator | Wednesday 17 September 2025 00:44:18 +0000 (0:00:00.766) 0:00:04.147 *** 2025-09-17 00:50:29.019812 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.019823 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.019834 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.019845 | orchestrator | 2025-09-17 00:50:29.019855 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-17 00:50:29.019916 | orchestrator | Wednesday 17 September 2025 00:44:19 +0000 (0:00:00.702) 0:00:04.849 *** 2025-09-17 00:50:29.019929 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-17 00:50:29.019940 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-17 00:50:29.019951 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-17 00:50:29.019993 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-17 00:50:29.020005 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-17 00:50:29.020033 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-17 00:50:29.020044 | orchestrator | ok: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-17 00:50:29.020084 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-17 00:50:29.020095 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-17 00:50:29.020159 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-17 00:50:29.020170 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-17 00:50:29.020181 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-17 00:50:29.020191 | orchestrator | 2025-09-17 00:50:29.020202 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-17 00:50:29.020256 | orchestrator | Wednesday 17 September 2025 00:44:23 +0000 (0:00:04.017) 0:00:08.867 *** 2025-09-17 00:50:29.020268 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-17 00:50:29.020279 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-17 00:50:29.020290 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-17 00:50:29.020301 | orchestrator | 2025-09-17 00:50:29.020312 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-17 00:50:29.020322 | orchestrator | Wednesday 17 September 2025 00:44:24 +0000 (0:00:00.902) 0:00:09.770 *** 2025-09-17 00:50:29.020333 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-17 00:50:29.020344 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-17 00:50:29.020355 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-17 00:50:29.020365 | orchestrator | 2025-09-17 00:50:29.020376 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-17 00:50:29.020387 | 
orchestrator | Wednesday 17 September 2025 00:44:25 +0000 (0:00:01.447) 0:00:11.218 *** 2025-09-17 00:50:29.020398 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-17 00:50:29.020409 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.020433 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-17 00:50:29.020444 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.020455 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-17 00:50:29.020466 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.020477 | orchestrator | 2025-09-17 00:50:29.020489 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-17 00:50:29.020500 | orchestrator | Wednesday 17 September 2025 00:44:26 +0000 (0:00:00.548) 0:00:11.766 *** 2025-09-17 00:50:29.020514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.020543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.020555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.020700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.020715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.020734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.020747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 00:50:29.020766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
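The container definitions logged above carry two styles of Docker healthcheck: `healthcheck_curl http://<ip>:61313` for haproxy and `healthcheck_listen proxysql 6032` for proxysql. A minimal Python sketch of equivalent probes follows — note that the real kolla `healthcheck_curl` / `healthcheck_listen` helpers are shell scripts inside the images, so treating them as "HTTP request succeeds" and "TCP port accepts connections" is an assumption for illustration:

```python
import socket
import urllib.request


def check_http(url, timeout=30):
    # Rough stand-in for kolla's healthcheck_curl (assumption):
    # succeed when the endpoint answers with a 2xx/3xx response.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False


def check_listen(port, host="127.0.0.1", timeout=30):
    # Rough stand-in for healthcheck_listen (assumption):
    # succeed when something is accepting TCP connections on the port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

With `interval: 30`, `retries: 3`, and `start_period: 5` from the logged healthcheck dicts, Docker would run such a probe every 30 seconds and mark the container unhealthy after three consecutive failures, ignoring failures during the first 5 seconds after start.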
2025-09-17 00:50:29.020778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 00:50:29.020790 | orchestrator | 2025-09-17 00:50:29.020801 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-17 00:50:29.020812 | orchestrator | Wednesday 17 September 2025 00:44:28 +0000 (0:00:02.103) 0:00:13.870 *** 2025-09-17 00:50:29.020823 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.020834 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.020845 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.020855 | orchestrator | 2025-09-17 00:50:29.020866 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-17 00:50:29.020878 | orchestrator | Wednesday 17 September 2025 00:44:29 +0000 (0:00:00.930) 0:00:14.800 *** 2025-09-17 00:50:29.020888 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-17 00:50:29.020929 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-17 00:50:29.020941 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-17 00:50:29.020952 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-17 00:50:29.020962 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-17 00:50:29.020973 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-17 00:50:29.020984 | orchestrator | 2025-09-17 00:50:29.020995 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] 
***************** 2025-09-17 00:50:29.021005 | orchestrator | Wednesday 17 September 2025 00:44:31 +0000 (0:00:02.537) 0:00:17.338 *** 2025-09-17 00:50:29.021016 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.021033 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.021044 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.021054 | orchestrator | 2025-09-17 00:50:29.021065 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-17 00:50:29.021076 | orchestrator | Wednesday 17 September 2025 00:44:33 +0000 (0:00:02.122) 0:00:19.461 *** 2025-09-17 00:50:29.021087 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.021097 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.021108 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.021119 | orchestrator | 2025-09-17 00:50:29.021129 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-17 00:50:29.021140 | orchestrator | Wednesday 17 September 2025 00:44:35 +0000 (0:00:01.594) 0:00:21.055 *** 2025-09-17 00:50:29.021210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-17 00:50:29.021233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 00:50:29.021254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.021267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 00:50:29.021280 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.021291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-17 00:50:29.021309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 00:50:29.021321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.021333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 00:50:29.021351 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.021440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-17 00:50:29.021454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 00:50:29.021465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.021477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 00:50:29.021488 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.021499 | orchestrator | 2025-09-17 00:50:29.021510 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-17 00:50:29.021521 | orchestrator | Wednesday 17 September 2025 00:44:36 +0000 (0:00:00.671) 0:00:21.726 *** 2025-09-17 00:50:29.021533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.021657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.021691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.021705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.021716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.021728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 00:50:29.021745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.021757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.021781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 00:50:29.021799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.021811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.021823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce', '__omit_place_holder__b5d71253ea6fb25b9b7f04b889e5bd45768a01ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 00:50:29.021834 | orchestrator | 2025-09-17 00:50:29.021846 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-17 00:50:29.021856 | orchestrator | Wednesday 17 September 2025 00:44:39 +0000 (0:00:03.279) 0:00:25.006 *** 2025-09-17 00:50:29.021868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.021884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.021956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.021979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.021992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.022003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.022015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 00:50:29.022107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 00:50:29.022129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 00:50:29.022140 | orchestrator | 2025-09-17 00:50:29.022151 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-17 00:50:29.022162 | orchestrator | Wednesday 17 September 2025 00:44:43 +0000 (0:00:03.793) 0:00:28.799 *** 2025-09-17 00:50:29.022173 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-17 00:50:29.022185 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-17 00:50:29.022196 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-17 00:50:29.022207 | orchestrator | 2025-09-17 00:50:29.022218 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-17 00:50:29.022229 | orchestrator | Wednesday 17 September 2025 00:44:45 +0000 (0:00:02.069) 0:00:30.868 *** 2025-09-17 00:50:29.022240 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-17 00:50:29.022311 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-17 00:50:29.022323 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-17 00:50:29.022375 | orchestrator | 2025-09-17 00:50:29.023827 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-17 00:50:29.023959 | orchestrator | Wednesday 17 September 2025 00:44:49 +0000 (0:00:04.314) 0:00:35.183 *** 2025-09-17 00:50:29.023988 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.024002 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.024013 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.024035 | orchestrator | 2025-09-17 00:50:29.024059 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-17 00:50:29.024071 | orchestrator | Wednesday 17 September 2025 00:44:50 +0000 (0:00:00.739) 0:00:35.922 *** 2025-09-17 00:50:29.024083 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-17 00:50:29.024096 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-17 00:50:29.024107 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-17 00:50:29.024117 | orchestrator |
2025-09-17 00:50:29.024128 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-09-17 00:50:29.024139 | orchestrator | Wednesday 17 September 2025 00:44:53 +0000 (0:00:02.907) 0:00:38.829 ***
2025-09-17 00:50:29.024151 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-17 00:50:29.024162 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-17 00:50:29.024173 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-17 00:50:29.024184 | orchestrator |
2025-09-17 00:50:29.024195 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-09-17 00:50:29.024205 | orchestrator | Wednesday 17 September 2025 00:44:55 +0000 (0:00:02.661) 0:00:41.491 ***
2025-09-17 00:50:29.024217 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-09-17 00:50:29.024253 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-09-17 00:50:29.024265 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-09-17 00:50:29.024277 | orchestrator |
2025-09-17 00:50:29.024290 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-09-17 00:50:29.024302 | orchestrator | Wednesday 17 September 2025 00:44:57 +0000 (0:00:01.536) 0:00:43.028 ***
2025-09-17 00:50:29.024315 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-09-17 00:50:29.024339 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-09-17 00:50:29.024354 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-09-17 00:50:29.024367 | orchestrator |
2025-09-17 00:50:29.024379 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-17 00:50:29.024392 | orchestrator | Wednesday 17 September 2025 00:44:59 +0000 (0:00:01.762) 0:00:44.790 ***
2025-09-17 00:50:29.024405 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:50:29.024429 | orchestrator |
2025-09-17 00:50:29.024453 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-09-17 00:50:29.024477 | orchestrator | Wednesday 17 September 2025 00:45:00 +0000 (0:00:01.517) 0:00:46.308 ***
2025-09-17 00:50:29.024525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.024553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.024588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.024603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.024618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.024638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.024667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.024690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.024702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.024714 | orchestrator |
2025-09-17 00:50:29.024725 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-09-17 00:50:29.024736 | orchestrator | Wednesday 17 September 2025 00:45:04 +0000 (0:00:04.149) 0:00:50.458 ***
2025-09-17 00:50:29.024757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.024770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.024788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.024800 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.024811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.024827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.024839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.024851 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.024862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.024881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.024931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.024944 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.024955 | orchestrator |
2025-09-17 00:50:29.024966 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-09-17 00:50:29.024977 | orchestrator | Wednesday 17 September 2025 00:45:05 +0000 (0:00:00.553) 0:00:51.011 ***
2025-09-17 00:50:29.024988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025066 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.025078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025089 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.025100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025153 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.025186 | orchestrator |
2025-09-17 00:50:29.025198 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-09-17 00:50:29.025209 | orchestrator | Wednesday 17 September 2025 00:45:06 +0000 (0:00:00.854) 0:00:51.866 ***
2025-09-17 00:50:29.025220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025271 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.025282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025376 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.025403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025452 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.025463 | orchestrator |
2025-09-17 00:50:29.025474 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-09-17 00:50:29.025485 | orchestrator | Wednesday 17 September 2025 00:45:07 +0000 (0:00:00.777) 0:00:52.644 ***
2025-09-17 00:50:29.025497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025532 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.025553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025594 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.025612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025648 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.025659 | orchestrator |
2025-09-17 00:50:29.025670 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-09-17 00:50:29.025681 | orchestrator | Wednesday 17 September 2025 00:45:07 +0000 (0:00:00.824) 0:00:53.468 ***
2025-09-17 00:50:29.025693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025740 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.025757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025792 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.025804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.025850 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.025861 | orchestrator |
2025-09-17 00:50:29.025872 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-09-17 00:50:29.025883 | orchestrator | Wednesday 17 September 2025 00:45:08 +0000 (0:00:00.892) 0:00:54.361 ***
2025-09-17 00:50:29.025958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 00:50:29.025981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 00:50:29.025994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 00:50:29.026006 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.026053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-17 00:50:29.026070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 00:50:29.026088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.026107 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.026118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-17 00:50:29.026137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 00:50:29.026150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.026161 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.026172 | orchestrator | 2025-09-17 00:50:29.026183 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-17 00:50:29.026194 | orchestrator | Wednesday 17 September 2025 00:45:09 +0000 (0:00:01.047) 0:00:55.409 *** 2025-09-17 00:50:29.026205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-17 00:50:29.026216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 00:50:29.026228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.026250 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.026261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-17 00:50:29.026273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 00:50:29.026293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.026305 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.026316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-17 00:50:29.026328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 00:50:29.026339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.026357 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.026368 | orchestrator | 2025-09-17 00:50:29.026379 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-17 00:50:29.026403 | orchestrator | Wednesday 17 September 2025 00:45:10 +0000 (0:00:00.640) 0:00:56.049 *** 2025-09-17 00:50:29.026440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-17 00:50:29.026452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 00:50:29.026462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.026472 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.026488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-17 00:50:29.026499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 00:50:29.026510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.026526 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.026536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-17 00:50:29.026550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 00:50:29.026561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 00:50:29.026571 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.026581 | orchestrator | 2025-09-17 00:50:29.026591 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-17 00:50:29.026601 | orchestrator | Wednesday 17 September 2025 00:45:11 +0000 (0:00:00.711) 0:00:56.761 
*** 2025-09-17 00:50:29.026611 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-17 00:50:29.026621 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-17 00:50:29.026636 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-17 00:50:29.026657 | orchestrator | 2025-09-17 00:50:29.026667 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-17 00:50:29.026677 | orchestrator | Wednesday 17 September 2025 00:45:13 +0000 (0:00:02.243) 0:00:59.004 *** 2025-09-17 00:50:29.026687 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-17 00:50:29.026697 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-17 00:50:29.026706 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-17 00:50:29.026716 | orchestrator | 2025-09-17 00:50:29.026726 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-17 00:50:29.026735 | orchestrator | Wednesday 17 September 2025 00:45:15 +0000 (0:00:01.867) 0:01:00.872 *** 2025-09-17 00:50:29.026745 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-17 00:50:29.026755 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-17 00:50:29.026765 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-17 00:50:29.026781 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 
'id_rsa.pub'})  2025-09-17 00:50:29.026791 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.026800 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-17 00:50:29.026810 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.026819 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-17 00:50:29.026829 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.026839 | orchestrator | 2025-09-17 00:50:29.026848 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-17 00:50:29.026858 | orchestrator | Wednesday 17 September 2025 00:45:16 +0000 (0:00:01.682) 0:01:02.554 *** 2025-09-17 00:50:29.026868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.026886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.026916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-17 00:50:29.026933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.026944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.026961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 00:50:29.026971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 00:50:29.026982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 00:50:29.026992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 00:50:29.027002 | orchestrator | 2025-09-17 00:50:29.027023 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-17 00:50:29.027033 | orchestrator | Wednesday 17 September 2025 00:45:20 +0000 (0:00:03.816) 0:01:06.371 *** 2025-09-17 00:50:29.027043 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.027053 | orchestrator | 2025-09-17 00:50:29.027063 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-17 00:50:29.027072 | orchestrator | Wednesday 17 September 2025 00:45:21 +0000 (0:00:00.573) 0:01:06.945 *** 2025-09-17 00:50:29.027109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 
2025-09-17 00:50:29.027127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-17 00:50:29.027145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-17 00:50:29.027181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-17 00:50:29.027191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-17 00:50:29.027243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-17 00:50:29.027274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027321 | orchestrator |
2025-09-17 00:50:29.027341 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-09-17 00:50:29.027351 | orchestrator | Wednesday 17 September 2025 00:45:26 +0000 (0:00:04.795) 0:01:11.740 ***
2025-09-17 00:50:29.027361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-17 00:50:29.027383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-17 00:50:29.027394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027415 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.027425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-17 00:50:29.027440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-17 00:50:29.027450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027477 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.027494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-17 00:50:29.027505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-17 00:50:29.027515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027540 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.027551 | orchestrator |
2025-09-17 00:50:29.027561 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-09-17 00:50:29.027571 | orchestrator | Wednesday 17 September 2025 00:45:27 +0000 (0:00:01.021) 0:01:12.762 ***
2025-09-17 00:50:29.027581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-09-17 00:50:29.027592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-09-17 00:50:29.027602 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.027612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-09-17 00:50:29.027627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-09-17 00:50:29.027638 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.027648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-09-17 00:50:29.027657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-09-17 00:50:29.027668 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.027677 | orchestrator |
2025-09-17 00:50:29.027693 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-09-17 00:50:29.027703 | orchestrator | Wednesday 17 September 2025 00:45:28 +0000 (0:00:01.135) 0:01:13.898 ***
2025-09-17 00:50:29.027713 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:50:29.027722 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:50:29.027732 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:50:29.027741 | orchestrator |
2025-09-17 00:50:29.027751 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-09-17 00:50:29.027761 | orchestrator | Wednesday 17 September 2025 00:45:30 +0000 (0:00:02.007) 0:01:15.905 ***
2025-09-17 00:50:29.027771 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:50:29.027780 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:50:29.027790 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:50:29.027799 | orchestrator |
2025-09-17 00:50:29.027809 | orchestrator | TASK [include_role : barbican] *************************************************
2025-09-17 00:50:29.027819 | orchestrator | Wednesday 17 September 2025 00:45:32 +0000 (0:00:01.710) 0:01:17.615 ***
2025-09-17 00:50:29.027828 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:50:29.027838 | orchestrator |
2025-09-17 00:50:29.027848 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-09-17 00:50:29.027857 | orchestrator | Wednesday 17 September 2025 00:45:32 +0000 (0:00:00.649) 0:01:18.265 ***
2025-09-17 00:50:29.027868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-17 00:50:29.027883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-17 00:50:29.027918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.027967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-17 00:50:29.027981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.028002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.028012 | orchestrator |
2025-09-17 00:50:29.028022 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-09-17 00:50:29.028032 | orchestrator | Wednesday 17 September 2025 00:45:36 +0000 (0:00:04.017) 0:01:22.282 ***
2025-09-17 00:50:29.028049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-17 00:50:29.028060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.028070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.028080 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.028094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-17 00:50:29.028111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.028121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.028131 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.028147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-17 00:50:29.028158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.028168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:50:29.028184 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.028194 | orchestrator |
2025-09-17 00:50:29.028204 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-09-17 00:50:29.028214 | orchestrator | Wednesday 17 September 2025 00:45:37 +0000 (0:00:00.663) 0:01:22.945 ***
2025-09-17 00:50:29.028224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-17 00:50:29.028234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-17 00:50:29.028248 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.028258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-17 00:50:29.028268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-17 00:50:29.028278 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.028288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-17 00:50:29.028298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-17 00:50:29.028308 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.028317 | orchestrator |
2025-09-17 00:50:29.028340 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-09-17 00:50:29.028370 | orchestrator | Wednesday 17 September 2025 00:45:38 +0000 (0:00:00.891) 0:01:23.837 ***
2025-09-17 00:50:29.028380 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:50:29.028390 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:50:29.028410 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:50:29.028430 | orchestrator |
2025-09-17 00:50:29.028440 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-09-17 00:50:29.028450 | orchestrator | Wednesday 17 September 2025 00:45:39 +0000 (0:00:01.304) 0:01:25.141 ***
2025-09-17 00:50:29.028460 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:50:29.028469 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:50:29.028479 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:50:29.028489 | orchestrator |
2025-09-17 00:50:29.028504 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-09-17 00:50:29.028514 | orchestrator | Wednesday 17 September 2025 00:45:41 +0000 (0:00:01.944) 0:01:27.086 ***
2025-09-17 00:50:29.028524 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.028534 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.028543 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.028553 | orchestrator |
2025-09-17 00:50:29.028563 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-09-17 00:50:29.028573 | orchestrator | Wednesday 17 September 2025 00:45:41 +0000 (0:00:00.277) 0:01:27.363 ***
2025-09-17 00:50:29.028582 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:50:29.028592 | orchestrator |
2025-09-17 00:50:29.028602 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-09-17 00:50:29.028611 | orchestrator | Wednesday 17 September 2025 00:45:42 +0000 (0:00:00.722) 0:01:28.085 ***
2025-09-17 00:50:29.028622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-17 00:50:29.028639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-17 00:50:29.028654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-17 00:50:29.028665 | orchestrator |
2025-09-17 00:50:29.028674 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-09-17 00:50:29.028684 | orchestrator | Wednesday 17 September 2025 00:45:44 +0000 (0:00:02.270) 0:01:30.356 ***
2025-09-17 00:50:29.028700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-17 00:50:29.028711 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.028721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw':
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-17 00:50:29.028737 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.028747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-17 00:50:29.028757 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.028767 | orchestrator | 2025-09-17 00:50:29.028777 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-17 00:50:29.028786 | orchestrator | Wednesday 17 September 2025 00:45:46 +0000 (0:00:01.347) 0:01:31.704 *** 
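The ceph-rgw items logged above carry a `custom_member_list` of pre-formatted HAProxy `server` lines pointing at testbed-node-3/4/5 on port 8081. As an illustration only (a hypothetical `render_backend` helper, not kolla-ansible's actual Jinja2 template), such a service dict maps onto an HAProxy backend roughly like this:

```python
def render_backend(name, service):
    """Render a simple HAProxy backend from a kolla-style service dict.

    Hypothetical stand-in for the role's template: emit a backend header,
    the mode, then the pre-formatted 'custom_member_list' lines verbatim.
    """
    lines = [f"backend {name}_back", f"    mode {service['mode']}"]
    for member in service.get("custom_member_list", []):
        lines.append(f"    {member}")
    return "\n".join(lines)


# Data taken verbatim from the radosgw entry in the log above.
radosgw = {
    "enabled": True,
    "mode": "http",
    "external": False,
    "port": "6780",
    "custom_member_list": [
        "server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5",
    ],
}

print(render_backend("radosgw", radosgw))
```

The `check inter 2000 rise 2 fall 5` suffix on each member line is standard HAProxy health-check tuning: probe every 2000 ms, mark a server up after 2 successes and down after 5 failures.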
2025-09-17 00:50:29.028797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 00:50:29.028813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 00:50:29.028825 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.028835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 00:50:29.028845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 00:50:29.028856 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.028871 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 00:50:29.028888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 00:50:29.028944 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.028955 | orchestrator | 2025-09-17 00:50:29.028965 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-17 00:50:29.028975 | orchestrator | Wednesday 17 September 2025 00:45:47 +0000 (0:00:01.504) 0:01:33.208 *** 2025-09-17 00:50:29.028984 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.028994 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.029004 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.029013 | orchestrator | 2025-09-17 00:50:29.029023 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-17 00:50:29.029033 | orchestrator | Wednesday 17 September 2025 00:45:48 +0000 (0:00:00.603) 0:01:33.812 *** 2025-09-17 00:50:29.029042 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.029052 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.029062 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.029071 | orchestrator | 2025-09-17 
00:50:29.029081 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-17 00:50:29.029091 | orchestrator | Wednesday 17 September 2025 00:45:49 +0000 (0:00:01.081) 0:01:34.893 *** 2025-09-17 00:50:29.029100 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.029122 | orchestrator | 2025-09-17 00:50:29.029142 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-17 00:50:29.029152 | orchestrator | Wednesday 17 September 2025 00:45:50 +0000 (0:00:00.791) 0:01:35.685 *** 2025-09-17 00:50:29.029162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.029187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.029259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.029318 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029347 | orchestrator | 2025-09-17 00:50:29.029355 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-17 00:50:29.029367 | orchestrator | Wednesday 17 September 2025 00:45:54 +0000 (0:00:04.284) 0:01:39.970 *** 2025-09-17 00:50:29.029376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.029394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029425 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.029434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.029446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029483 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.029491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2025-09-17 00:50:29.029499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029533 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.029541 | orchestrator | 2025-09-17 00:50:29.029549 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-17 00:50:29.029557 | orchestrator | Wednesday 17 September 2025 00:45:55 +0000 (0:00:00.848) 0:01:40.819 *** 2025-09-17 00:50:29.029565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-17 00:50:29.029579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-17 00:50:29.029587 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.029596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-17 00:50:29.029604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-17 00:50:29.029612 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.029620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  
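In the cinder haproxy tasks above, only `cinder-api` reports `changed` while `cinder-scheduler`, `cinder-volume`, and `cinder-backup` are skipped: the role loops over the service map (as with Ansible's `dict2items`) and only items whose value defines an enabled `haproxy` mapping produce config. A minimal sketch of that selection logic (a hypothetical reimplementation for illustration, not the role's actual `when` condition):

```python
def select_haproxy_services(services):
    """Keep only service entries whose value defines a 'haproxy' mapping
    with at least one enabled frontend; report the rest as skipped.

    Hypothetical illustration of the loop/skip pattern seen in the log,
    not kolla-ansible's actual condition.
    """
    changed, skipped = [], []
    for key, value in services.items():
        haproxy = value.get("haproxy", {})
        if any(str(frontend.get("enabled")).lower() in ("true", "yes")
               for frontend in haproxy.values()):
            changed.append(key)
        else:
            skipped.append(key)
    return changed, skipped


# Shape mirrors the cinder service dicts in the log above (trimmed).
cinder = {
    "cinder-api": {"haproxy": {"cinder_api": {"enabled": "yes"}}},
    "cinder-scheduler": {},  # no haproxy section -> skipped
    "cinder-volume": {},
    "cinder-backup": {},
}

changed, skipped = select_haproxy_services(cinder)
print(changed, skipped)
# -> ['cinder-api'] ['cinder-scheduler', 'cinder-volume', 'cinder-backup']
```

This matches the log: the scheduler, volume, and backup containers have no frontend to load-balance, so only the API service gets an HAProxy entry.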
2025-09-17 00:50:29.029629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-17 00:50:29.029637 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.029645 | orchestrator | 2025-09-17 00:50:29.029653 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-17 00:50:29.029661 | orchestrator | Wednesday 17 September 2025 00:45:56 +0000 (0:00:00.940) 0:01:41.759 *** 2025-09-17 00:50:29.029669 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.029677 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.029684 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.029692 | orchestrator | 2025-09-17 00:50:29.029700 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-17 00:50:29.029708 | orchestrator | Wednesday 17 September 2025 00:45:57 +0000 (0:00:01.268) 0:01:43.028 *** 2025-09-17 00:50:29.029716 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.029724 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.029732 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.029739 | orchestrator | 2025-09-17 00:50:29.029747 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-17 00:50:29.029755 | orchestrator | Wednesday 17 September 2025 00:45:59 +0000 (0:00:01.837) 0:01:44.866 *** 2025-09-17 00:50:29.029763 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.029771 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.029784 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.029792 | orchestrator | 2025-09-17 00:50:29.029800 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2025-09-17 00:50:29.029808 | orchestrator | Wednesday 17 September 2025 00:45:59 +0000 (0:00:00.392) 0:01:45.258 *** 2025-09-17 00:50:29.029816 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.029824 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.029831 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.029839 | orchestrator | 2025-09-17 00:50:29.029847 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-17 00:50:29.029855 | orchestrator | Wednesday 17 September 2025 00:45:59 +0000 (0:00:00.273) 0:01:45.532 *** 2025-09-17 00:50:29.029863 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.029871 | orchestrator | 2025-09-17 00:50:29.029882 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-17 00:50:29.029891 | orchestrator | Wednesday 17 September 2025 00:46:00 +0000 (0:00:00.690) 0:01:46.222 *** 2025-09-17 00:50:29.029914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 00:50:29.029929 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:50:29.029939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.029981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 00:50:29.029989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:50:29.030012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  
2025-09-17 00:50:29.030046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 00:50:29.030073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:50:29.030105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030157 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030173 | orchestrator | 2025-09-17 00:50:29.030181 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-17 00:50:29.030190 | orchestrator | Wednesday 17 September 2025 00:46:04 +0000 (0:00:03.777) 0:01:50.000 *** 2025-09-17 00:50:29.030204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:50:29.030213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:50:29.030226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030273 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.030281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:50:29.030310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:50:29.030319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030374 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.030383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2025-09-17 00:50:29.030391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:50:29.030403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030420 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.030456 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.030464 | orchestrator | 2025-09-17 00:50:29.030472 | orchestrator | TASK [haproxy-config 
: Configuring firewall for designate] ********************* 2025-09-17 00:50:29.030480 | orchestrator | Wednesday 17 September 2025 00:46:05 +0000 (0:00:00.812) 0:01:50.813 *** 2025-09-17 00:50:29.030489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-17 00:50:29.030497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-17 00:50:29.030505 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.030513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-17 00:50:29.030522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-17 00:50:29.030530 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.030537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-17 00:50:29.030546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-17 00:50:29.030557 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.030576 | orchestrator | 2025-09-17 00:50:29.030584 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-17 00:50:29.030592 | 
orchestrator | Wednesday 17 September 2025 00:46:06 +0000 (0:00:00.956) 0:01:51.769 *** 2025-09-17 00:50:29.030600 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.030608 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.030616 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.030624 | orchestrator | 2025-09-17 00:50:29.030632 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-17 00:50:29.030640 | orchestrator | Wednesday 17 September 2025 00:46:07 +0000 (0:00:01.761) 0:01:53.531 *** 2025-09-17 00:50:29.030647 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.030655 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.030663 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.030671 | orchestrator | 2025-09-17 00:50:29.030678 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-17 00:50:29.030686 | orchestrator | Wednesday 17 September 2025 00:46:09 +0000 (0:00:01.767) 0:01:55.298 *** 2025-09-17 00:50:29.030694 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.030702 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.030714 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.030722 | orchestrator | 2025-09-17 00:50:29.030730 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-17 00:50:29.030738 | orchestrator | Wednesday 17 September 2025 00:46:10 +0000 (0:00:00.489) 0:01:55.788 *** 2025-09-17 00:50:29.030746 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.030753 | orchestrator | 2025-09-17 00:50:29.030761 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-17 00:50:29.030769 | orchestrator | Wednesday 17 September 2025 00:46:10 +0000 (0:00:00.797) 0:01:56.585 *** 2025-09-17 
00:50:29.030786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 00:50:29.030801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.030823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 00:50:29.030836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.030851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 00:50:29.030866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.030875 | orchestrator | 2025-09-17 00:50:29.030883 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-17 00:50:29.030891 | orchestrator | Wednesday 17 September 2025 00:46:15 +0000 (0:00:04.123) 0:02:00.708 *** 2025-09-17 00:50:29.030923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 00:50:29.030938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.030947 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.030960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 00:50:29.030981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.030989 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.031006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 00:50:29.031027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.031036 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.031044 | orchestrator | 2025-09-17 00:50:29.031053 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-17 00:50:29.031061 | orchestrator | Wednesday 17 September 2025 00:46:18 +0000 (0:00:02.968) 0:02:03.677 *** 2025-09-17 00:50:29.031069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 00:50:29.031078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 00:50:29.031086 | orchestrator | 
skipping: [testbed-node-0] 2025-09-17 00:50:29.031099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 00:50:29.031112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 00:50:29.031120 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.031129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 00:50:29.031143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 00:50:29.031152 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.031160 | orchestrator | 2025-09-17 00:50:29.031168 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-17 00:50:29.031176 | orchestrator | Wednesday 17 September 2025 00:46:21 +0000 (0:00:03.178) 0:02:06.856 *** 2025-09-17 00:50:29.031184 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.031192 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.031200 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.031208 | orchestrator | 2025-09-17 00:50:29.031216 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-17 00:50:29.031223 | orchestrator | Wednesday 17 September 2025 00:46:22 +0000 (0:00:01.297) 0:02:08.153 *** 2025-09-17 00:50:29.031231 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.031239 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.031247 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.031255 | orchestrator | 2025-09-17 00:50:29.031263 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-17 00:50:29.031271 | orchestrator | Wednesday 17 September 2025 00:46:24 +0000 (0:00:02.086) 0:02:10.239 *** 2025-09-17 00:50:29.031279 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.031287 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.031295 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.031302 | orchestrator | 2025-09-17 00:50:29.031310 | orchestrator | TASK [include_role : grafana] 
************************************************** 2025-09-17 00:50:29.031318 | orchestrator | Wednesday 17 September 2025 00:46:25 +0000 (0:00:00.509) 0:02:10.749 *** 2025-09-17 00:50:29.031326 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.031334 | orchestrator | 2025-09-17 00:50:29.031342 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-17 00:50:29.031350 | orchestrator | Wednesday 17 September 2025 00:46:25 +0000 (0:00:00.827) 0:02:11.577 *** 2025-09-17 00:50:29.031362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 00:50:29.031374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 00:50:29.031383 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 00:50:29.031392 | orchestrator | 2025-09-17 00:50:29.031400 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-17 00:50:29.031408 | orchestrator | Wednesday 17 September 2025 00:46:29 +0000 (0:00:03.380) 0:02:14.957 *** 2025-09-17 00:50:29.031423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 00:50:29.031432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 00:50:29.031440 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.031448 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.031457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 00:50:29.031470 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.031478 | orchestrator | 2025-09-17 00:50:29.031486 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-17 00:50:29.031494 | orchestrator | Wednesday 17 September 2025 00:46:30 +0000 (0:00:00.671) 0:02:15.629 *** 2025-09-17 00:50:29.031502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-17 00:50:29.031510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-17 00:50:29.031518 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.031530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-17 00:50:29.031538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-17 00:50:29.031546 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.031554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-17 00:50:29.031562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-17 00:50:29.031570 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.031578 | orchestrator |
2025-09-17 00:50:29.031586 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-09-17 00:50:29.031594 | orchestrator | Wednesday 17 September 2025 00:46:30 +0000 (0:00:00.658) 0:02:16.287 ***
2025-09-17 00:50:29.031602 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:50:29.031610 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:50:29.031618 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:50:29.031626 | orchestrator |
2025-09-17 00:50:29.031634 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-09-17 00:50:29.031642 | orchestrator | Wednesday 17 September 2025 00:46:32 +0000
(0:00:01.362) 0:02:17.650 ***
2025-09-17 00:50:29.031649 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:50:29.031658 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:50:29.031666 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:50:29.031673 | orchestrator |
2025-09-17 00:50:29.031681 | orchestrator | TASK [include_role : heat] *****************************************************
2025-09-17 00:50:29.031689 | orchestrator | Wednesday 17 September 2025 00:46:34 +0000 (0:00:02.422) 0:02:20.072 ***
2025-09-17 00:50:29.031697 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.031705 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.031718 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.031726 | orchestrator |
2025-09-17 00:50:29.031735 | orchestrator | TASK [include_role : horizon] **************************************************
2025-09-17 00:50:29.031743 | orchestrator | Wednesday 17 September 2025 00:46:35 +0000 (0:00:00.529) 0:02:20.602 ***
2025-09-17 00:50:29.031755 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:50:29.031763 | orchestrator |
2025-09-17 00:50:29.031771 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-09-17 00:50:29.031779 | orchestrator | Wednesday 17 September 2025 00:46:35 +0000 (0:00:00.920) 0:02:21.523 ***
2025-09-17 00:50:29.031792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no',
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-17 00:50:29.031809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no',
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-17 00:50:29.031827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True,
'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}})
2025-09-17 00:50:29.031837 | orchestrator |
2025-09-17 00:50:29.031845 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-09-17 00:50:29.031853 | orchestrator | Wednesday 17 September 2025 00:46:39 +0000 (0:00:03.802) 0:02:25.325 ***
2025-09-17 00:50:29.031868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80',
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-17 00:50:29.031882 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.031931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port':
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-17 00:50:29.031942 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.032363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80',
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-17 00:50:29.032418 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.032430 | orchestrator |
2025-09-17 00:50:29.032441 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-09-17 00:50:29.032452 | orchestrator | Wednesday 17 September 2025 00:46:41 +0000 (0:00:01.336) 0:02:26.662 ***
2025-09-17 00:50:29.032463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 00:50:29.032477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80',
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 00:50:29.032497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 00:50:29.032508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 00:50:29.032518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-17 00:50:29.032528 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.032536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 00:50:29.032545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 00:50:29.032557 | orchestrator | skipping: [testbed-node-2] =>
(item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 00:50:29.032574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 00:50:29.032583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 00:50:29.032592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 00:50:29.032600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 00:50:29.032608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 00:50:29.032616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-17 00:50:29.032624 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.032632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-17 00:50:29.032640 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.032648 | orchestrator |
2025-09-17 00:50:29.032656 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-09-17 00:50:29.032664 | orchestrator | Wednesday 17 September 2025 00:46:42 +0000 (0:00:00.957) 0:02:27.620 ***
2025-09-17 00:50:29.032673 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:50:29.032680 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:50:29.032688 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:50:29.032696 | orchestrator |
2025-09-17 00:50:29.032704 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-09-17 00:50:29.032711 | orchestrator | Wednesday 17 September 2025 00:46:43 +0000 (0:00:01.384) 0:02:29.004 ***
2025-09-17 00:50:29.032722 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:50:29.032730 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:50:29.032738 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:50:29.032746 | orchestrator |
2025-09-17 00:50:29.032754 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-09-17 00:50:29.032761 | orchestrator | Wednesday 17 September 2025 00:46:45 +0000 (0:00:02.111) 0:02:31.116 ***
2025-09-17 00:50:29.032798 | orchestrator
| skipping: [testbed-node-0]
2025-09-17 00:50:29.032806 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.032813 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.032821 | orchestrator |
2025-09-17 00:50:29.032829 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-09-17 00:50:29.032837 | orchestrator | Wednesday 17 September 2025 00:46:45 +0000 (0:00:00.326) 0:02:31.442 ***
2025-09-17 00:50:29.032845 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:50:29.032853 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:50:29.032860 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:50:29.032868 | orchestrator |
2025-09-17 00:50:29.032876 | orchestrator | TASK [include_role : keystone] *************************************************
2025-09-17 00:50:29.032884 | orchestrator | Wednesday 17 September 2025 00:46:46 +0000 (0:00:00.750) 0:02:32.192 ***
2025-09-17 00:50:29.032892 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:50:29.032921 | orchestrator |
2025-09-17 00:50:29.032930 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-09-17 00:50:29.032938 | orchestrator | Wednesday 17 September 2025 00:46:47 +0000 (0:00:00.966) 0:02:33.159 ***
2025-09-17 00:50:29.032954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'},
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 00:50:29.032965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 00:50:29.032975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-17 00:50:29.032989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-17 00:50:29.033003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 00:50:29.033013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 00:50:29.033026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 00:50:29.033036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-17 00:50:29.033044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2',
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 00:50:29.033056 | orchestrator |
2025-09-17 00:50:29.033065 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-09-17 00:50:29.033074 | orchestrator | Wednesday 17 September 2025 00:46:51 +0000 (0:00:03.810) 0:02:36.969 ***
2025-09-17 00:50:29.033086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 00:50:29.033095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2',
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:50:29.033110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 00:50:29.033119 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.033127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-17 00:50:29.033136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:50:29.033152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 00:50:29.033160 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.033169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-17 00:50:29.033184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:50:29.033192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 00:50:29.033200 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.033208 | orchestrator | 2025-09-17 
00:50:29.033217 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-17 00:50:29.033225 | orchestrator | Wednesday 17 September 2025 00:46:52 +0000 (0:00:00.628) 0:02:37.598 *** 2025-09-17 00:50:29.033233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-17 00:50:29.033243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-17 00:50:29.033255 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.033263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-17 00:50:29.033272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-17 00:50:29.033280 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.033288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-17 00:50:29.033299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-17 00:50:29.033307 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.033315 | orchestrator | 2025-09-17 00:50:29.033323 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-17 00:50:29.033331 | orchestrator | Wednesday 17 September 2025 00:46:52 +0000 (0:00:00.860) 0:02:38.458 *** 2025-09-17 00:50:29.033339 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.033347 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.033354 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.033362 | orchestrator | 2025-09-17 00:50:29.033370 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-17 00:50:29.033378 | orchestrator | Wednesday 17 September 2025 00:46:54 +0000 (0:00:01.324) 0:02:39.783 *** 2025-09-17 00:50:29.033386 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.033394 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.033402 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.033410 | orchestrator | 2025-09-17 00:50:29.033418 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-17 00:50:29.033425 | orchestrator | Wednesday 17 September 2025 00:46:56 +0000 (0:00:02.494) 0:02:42.277 *** 2025-09-17 00:50:29.033433 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.033441 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.033449 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.033457 | orchestrator | 2025-09-17 00:50:29.033465 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-17 00:50:29.033473 | orchestrator | Wednesday 17 September 2025 00:46:57 +0000 
(0:00:00.701) 0:02:42.979 *** 2025-09-17 00:50:29.033481 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.033488 | orchestrator | 2025-09-17 00:50:29.033496 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-17 00:50:29.033504 | orchestrator | Wednesday 17 September 2025 00:46:58 +0000 (0:00:01.017) 0:02:43.997 *** 2025-09-17 00:50:29.033519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 00:50:29.033533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.033542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 00:50:29.033557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.033566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 00:50:29.033580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.033597 | orchestrator | 2025-09-17 00:50:29.033605 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-17 00:50:29.033613 | orchestrator | Wednesday 17 September 2025 00:47:02 +0000 (0:00:04.288) 0:02:48.285 *** 2025-09-17 00:50:29.033621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 00:50:29.033633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.033642 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.033650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 00:50:29.033663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.033671 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.033684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 00:50:29.033692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.033700 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.033708 | orchestrator | 2025-09-17 00:50:29.033716 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-17 00:50:29.033724 | orchestrator | Wednesday 17 September 2025 00:47:03 +0000 (0:00:01.193) 0:02:49.479 *** 2025-09-17 00:50:29.033733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-17 00:50:29.033742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-17 00:50:29.033751 | orchestrator | skipping: 
[testbed-node-0] 2025-09-17 00:50:29.033759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-17 00:50:29.033768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-17 00:50:29.033776 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.033784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-17 00:50:29.033792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-17 00:50:29.033800 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.033808 | orchestrator | 2025-09-17 00:50:29.033816 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-17 00:50:29.033824 | orchestrator | Wednesday 17 September 2025 00:47:04 +0000 (0:00:00.956) 0:02:50.435 *** 2025-09-17 00:50:29.033832 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.033839 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.033847 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.033855 | orchestrator | 2025-09-17 00:50:29.033867 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-17 00:50:29.033875 | orchestrator | Wednesday 17 September 2025 00:47:06 +0000 (0:00:01.316) 0:02:51.752 *** 2025-09-17 00:50:29.033883 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.033891 | orchestrator | changed: 
[testbed-node-1] 2025-09-17 00:50:29.033914 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.033922 | orchestrator | 2025-09-17 00:50:29.033930 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-17 00:50:29.033938 | orchestrator | Wednesday 17 September 2025 00:47:08 +0000 (0:00:02.246) 0:02:53.999 *** 2025-09-17 00:50:29.033950 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.033958 | orchestrator | 2025-09-17 00:50:29.033966 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-17 00:50:29.033974 | orchestrator | Wednesday 17 September 2025 00:47:09 +0000 (0:00:01.268) 0:02:55.267 *** 2025-09-17 00:50:29.033983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-17 00:50:29.034008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-17 00:50:29.034082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 
'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-17 00:50:29.034161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034170 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034196 | orchestrator | 2025-09-17 00:50:29.034204 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-17 00:50:29.034212 | orchestrator | Wednesday 17 September 2025 00:47:13 +0000 (0:00:03.903) 0:02:59.171 *** 2025-09-17 00:50:29.034220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-17 00:50:29.034229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034261 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.034269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-17 00:50:29.034294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 
00:50:29.034303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034319 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.034327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-17 00:50:29.034339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.034375 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.034383 | orchestrator | 2025-09-17 00:50:29.034391 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-17 00:50:29.034399 | orchestrator | Wednesday 17 September 2025 00:47:14 +0000 (0:00:00.677) 0:02:59.849 *** 2025-09-17 00:50:29.034407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-17 00:50:29.034415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-17 00:50:29.034423 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.034431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-17 00:50:29.034439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-17 00:50:29.034447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-17 00:50:29.034456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786'}})  2025-09-17 00:50:29.034463 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.034471 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.034479 | orchestrator | 2025-09-17 00:50:29.034487 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-17 00:50:29.034495 | orchestrator | Wednesday 17 September 2025 00:47:15 +0000 (0:00:01.515) 0:03:01.364 *** 2025-09-17 00:50:29.034502 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.034510 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.034522 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.034530 | orchestrator | 2025-09-17 00:50:29.034538 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-17 00:50:29.034546 | orchestrator | Wednesday 17 September 2025 00:47:17 +0000 (0:00:01.431) 0:03:02.796 *** 2025-09-17 00:50:29.034553 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.034561 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.034569 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.034576 | orchestrator | 2025-09-17 00:50:29.034584 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-17 00:50:29.034608 | orchestrator | Wednesday 17 September 2025 00:47:19 +0000 (0:00:02.090) 0:03:04.887 *** 2025-09-17 00:50:29.034616 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.034624 | orchestrator | 2025-09-17 00:50:29.034632 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-17 00:50:29.034646 | orchestrator | Wednesday 17 September 2025 00:47:20 +0000 (0:00:01.341) 0:03:06.229 *** 2025-09-17 00:50:29.034655 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-17 00:50:29.034663 | orchestrator | 2025-09-17 
00:50:29.034671 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-17 00:50:29.034678 | orchestrator | Wednesday 17 September 2025 00:47:23 +0000 (0:00:02.867) 0:03:09.096 *** 2025-09-17 00:50:29.034693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:50:29.034703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 00:50:29.034712 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.034724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:50:29.034737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 00:50:29.034746 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.034761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:50:29.034774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 00:50:29.034783 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.034791 | orchestrator | 2025-09-17 00:50:29.034799 | 
orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-17 00:50:29.034807 | orchestrator | Wednesday 17 September 2025 00:47:25 +0000 (0:00:02.103) 0:03:11.200 *** 2025-09-17 00:50:29.034818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:50:29.034833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 00:50:29.034842 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.034850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:50:29.034866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 00:50:29.034875 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.034889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:50:29.034951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 00:50:29.034967 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.034974 | orchestrator | 2025-09-17 00:50:29.034981 | 
orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-17 00:50:29.034988 | orchestrator | Wednesday 17 September 2025 00:47:27 +0000 (0:00:02.208) 0:03:13.409 *** 2025-09-17 00:50:29.034995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 00:50:29.035005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 00:50:29.035012 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.035019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 00:50:29.035027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 00:50:29.035034 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.035046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 00:50:29.035057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 00:50:29.035064 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.035070 | orchestrator | 2025-09-17 00:50:29.035077 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-17 00:50:29.035083 | orchestrator | Wednesday 17 September 2025 00:47:30 +0000 (0:00:02.813) 0:03:16.222 *** 2025-09-17 00:50:29.035090 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.035097 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.035103 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.035110 | orchestrator | 2025-09-17 00:50:29.035117 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-17 00:50:29.035123 | orchestrator | Wednesday 17 September 2025 00:47:32 +0000 (0:00:01.867) 0:03:18.090 *** 2025-09-17 00:50:29.035130 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.035136 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.035143 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.035150 | orchestrator | 2025-09-17 00:50:29.035156 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-17 00:50:29.035163 | orchestrator | Wednesday 17 September 2025 00:47:33 +0000 (0:00:01.365) 0:03:19.455 *** 2025-09-17 00:50:29.035169 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.035176 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.035182 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.035189 | orchestrator | 2025-09-17 00:50:29.035196 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-17 00:50:29.035202 | orchestrator | Wednesday 17 September 2025 00:47:34 +0000 (0:00:00.305) 0:03:19.761 *** 2025-09-17 
00:50:29.035209 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.035215 | orchestrator | 2025-09-17 00:50:29.035222 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-17 00:50:29.035229 | orchestrator | Wednesday 17 September 2025 00:47:35 +0000 (0:00:01.302) 0:03:21.064 *** 2025-09-17 00:50:29.035238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-17 00:50:29.035247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-17 00:50:29.035262 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-17 00:50:29.035270 | orchestrator | 2025-09-17 00:50:29.035276 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-17 00:50:29.035283 | orchestrator | Wednesday 17 September 2025 00:47:37 +0000 (0:00:01.557) 0:03:22.621 *** 2025-09-17 00:50:29.035290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-17 00:50:29.035297 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.035304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-17 00:50:29.035311 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.035321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-17 00:50:29.035328 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.035335 | orchestrator | 2025-09-17 00:50:29.035341 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-17 00:50:29.035348 | orchestrator | Wednesday 17 September 2025 00:47:37 +0000 (0:00:00.440) 0:03:23.062 *** 2025-09-17 00:50:29.035355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-17 00:50:29.035366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-17 00:50:29.035373 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.035380 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.035391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-17 00:50:29.035398 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.035405 | orchestrator | 2025-09-17 00:50:29.035411 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-17 00:50:29.035418 | orchestrator | Wednesday 17 September 2025 00:47:38 +0000 (0:00:00.836) 0:03:23.898 *** 2025-09-17 00:50:29.035425 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.035431 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.035438 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.035444 | orchestrator | 2025-09-17 00:50:29.035451 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-17 00:50:29.035457 | orchestrator | Wednesday 17 September 2025 00:47:38 +0000 (0:00:00.435) 0:03:24.333 *** 2025-09-17 00:50:29.035464 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.035471 | orchestrator | skipping: [testbed-node-1] 
2025-09-17 00:50:29.035477 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.035484 | orchestrator | 2025-09-17 00:50:29.035490 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-17 00:50:29.035497 | orchestrator | Wednesday 17 September 2025 00:47:39 +0000 (0:00:01.218) 0:03:25.552 *** 2025-09-17 00:50:29.035503 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.035510 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.035517 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.035523 | orchestrator | 2025-09-17 00:50:29.035530 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-17 00:50:29.035536 | orchestrator | Wednesday 17 September 2025 00:47:40 +0000 (0:00:00.315) 0:03:25.868 *** 2025-09-17 00:50:29.035543 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.035550 | orchestrator | 2025-09-17 00:50:29.035556 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-17 00:50:29.035563 | orchestrator | Wednesday 17 September 2025 00:47:41 +0000 (0:00:01.468) 0:03:27.337 *** 2025-09-17 00:50:29.035570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 00:50:29.035580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-17 00:50:29.035620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2025-09-17 00:50:29.035631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 00:50:29.035642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.035649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.035669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-17 00:50:29.035703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 00:50:29.035715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-17 
00:50:29.035730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-17 00:50:29.035738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.035752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.035760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.035767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.035787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-17 00:50:29.035807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 
00:50:29.035815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-17 00:50:29.035833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.035841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 00:50:29.035848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.035880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:50:29.035888 | orchestrator | 2025-09-17 00:50:29 | INFO  | Wait 1 second(s) until the next check 2025-09-17
00:50:29.035910 | orchestrator | 2025-09-17 00:50:29.035918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-17 00:50:29.035925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'},
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-17 00:50:29.035946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.035965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.035972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.035979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 00:50:29.036098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.036142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-17 00:50:29.036155 | orchestrator | 2025-09-17 00:50:29.036162 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external 
frontend] *** 2025-09-17 00:50:29.036169 | orchestrator | Wednesday 17 September 2025 00:47:46 +0000 (0:00:04.351) 0:03:31.688 *** 2025-09-17 00:50:29.036179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 00:50:29.036190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-17 00:50:29.036223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 00:50:29.036244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 00:50:29.036305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-17 00:50:29.036312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.036392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 00:50:29.036402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-17 00:50:29.036410 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.036417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 00:50:29.036441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036476 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.036486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-17 00:50:29.036500 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.036511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-17 00:50:29.036518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036525 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 00:50:29.036615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 
'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 00:50:29.036640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.036651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-17 00:50:29.036659 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-17 00:50:29.036666 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.036672 | orchestrator | 2025-09-17 00:50:29.036679 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-17 00:50:29.036686 | orchestrator | Wednesday 17 September 2025 00:47:47 +0000 (0:00:01.499) 0:03:33.187 *** 2025-09-17 00:50:29.036693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-17 00:50:29.036700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-17 00:50:29.036709 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.036720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-17 00:50:29.036728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}) 
 2025-09-17 00:50:29.036736 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.036747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-17 00:50:29.036755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-17 00:50:29.036763 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.036770 | orchestrator | 2025-09-17 00:50:29.036778 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-17 00:50:29.036786 | orchestrator | Wednesday 17 September 2025 00:47:49 +0000 (0:00:01.896) 0:03:35.083 *** 2025-09-17 00:50:29.036797 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.036805 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.036813 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.036821 | orchestrator | 2025-09-17 00:50:29.036829 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-17 00:50:29.036837 | orchestrator | Wednesday 17 September 2025 00:47:50 +0000 (0:00:01.426) 0:03:36.510 *** 2025-09-17 00:50:29.036844 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.036852 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.036860 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.036867 | orchestrator | 2025-09-17 00:50:29.036875 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-17 00:50:29.036883 | orchestrator | Wednesday 17 September 2025 00:47:52 +0000 (0:00:02.039) 0:03:38.549 *** 2025-09-17 00:50:29.036891 | orchestrator | included: placement for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-09-17 00:50:29.036917 | orchestrator | 2025-09-17 00:50:29.036924 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-17 00:50:29.036932 | orchestrator | Wednesday 17 September 2025 00:47:54 +0000 (0:00:01.214) 0:03:39.763 *** 2025-09-17 00:50:29.036940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.036949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.036961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.036970 | orchestrator | 2025-09-17 00:50:29.036980 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-17 00:50:29.036993 | orchestrator | Wednesday 17 September 2025 00:47:57 +0000 (0:00:03.609) 0:03:43.373 *** 2025-09-17 00:50:29.037001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.037009 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.037017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.037025 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.037033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.037041 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.037048 | orchestrator | 2025-09-17 00:50:29.037056 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-17 00:50:29.037064 | orchestrator | Wednesday 17 September 2025 00:47:58 +0000 (0:00:00.554) 0:03:43.927 *** 2025-09-17 00:50:29.037071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037086 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.037095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037113 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.037123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037137 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.037143 | orchestrator | 2025-09-17 00:50:29.037150 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-17 00:50:29.037157 | orchestrator | Wednesday 17 September 2025 00:47:59 +0000 (0:00:00.731) 0:03:44.659 *** 2025-09-17 00:50:29.037163 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.037170 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.037177 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.037183 | orchestrator | 2025-09-17 00:50:29.037190 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-17 00:50:29.037197 | orchestrator | Wednesday 17 September 2025 00:48:00 +0000 (0:00:01.891) 0:03:46.551 *** 2025-09-17 00:50:29.037203 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.037210 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.037217 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.037223 | orchestrator | 2025-09-17 00:50:29.037230 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-17 00:50:29.037236 | orchestrator | Wednesday 17 September 2025 00:48:02 +0000 (0:00:01.886) 0:03:48.437 *** 2025-09-17 00:50:29.037243 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.037250 | orchestrator | 2025-09-17 00:50:29.037256 | orchestrator | TASK [haproxy-config : Copying 
over nova haproxy config] *********************** 2025-09-17 00:50:29.037263 | orchestrator | Wednesday 17 September 2025 00:48:04 +0000 (0:00:01.518) 0:03:49.956 *** 2025-09-17 00:50:29.037271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.037279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.037311 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.037340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037357 | orchestrator | 2025-09-17 00:50:29.037365 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-17 00:50:29.037371 | orchestrator | Wednesday 17 September 2025 
00:48:08 +0000 (0:00:04.356) 0:03:54.313 *** 2025-09-17 00:50:29.037379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.037386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037393 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037404 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.037420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.037428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037442 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.037449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.037460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.037478 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.037485 | orchestrator | 2025-09-17 00:50:29.037492 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] 
************************** 2025-09-17 00:50:29.037498 | orchestrator | Wednesday 17 September 2025 00:48:09 +0000 (0:00:01.148) 0:03:55.462 *** 2025-09-17 00:50:29.037509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037536 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.037543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037564 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037570 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.037577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 00:50:29.037608 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.037615 | orchestrator | 2025-09-17 00:50:29.037622 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-17 00:50:29.037629 | orchestrator | Wednesday 17 September 2025 00:48:10 +0000 (0:00:00.890) 0:03:56.353 *** 2025-09-17 00:50:29.037635 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.037642 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.037649 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.037655 | orchestrator | 2025-09-17 00:50:29.037662 | orchestrator | TASK [proxysql-config : 
Copying over nova ProxySQL rules config] *************** 2025-09-17 00:50:29.037669 | orchestrator | Wednesday 17 September 2025 00:48:12 +0000 (0:00:01.527) 0:03:57.880 *** 2025-09-17 00:50:29.037675 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.037682 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.037689 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.037695 | orchestrator | 2025-09-17 00:50:29.037702 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-17 00:50:29.037709 | orchestrator | Wednesday 17 September 2025 00:48:14 +0000 (0:00:02.139) 0:04:00.020 *** 2025-09-17 00:50:29.037715 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.037722 | orchestrator | 2025-09-17 00:50:29.037729 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-17 00:50:29.037825 | orchestrator | Wednesday 17 September 2025 00:48:15 +0000 (0:00:01.513) 0:04:01.534 *** 2025-09-17 00:50:29.037835 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-17 00:50:29.037842 | orchestrator | 2025-09-17 00:50:29.037849 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-17 00:50:29.037856 | orchestrator | Wednesday 17 September 2025 00:48:16 +0000 (0:00:00.804) 0:04:02.338 *** 2025-09-17 00:50:29.037866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-17 00:50:29.037874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-17 00:50:29.037881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-17 00:50:29.037892 | orchestrator | 2025-09-17 00:50:29.037915 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-17 00:50:29.037922 | orchestrator | Wednesday 17 September 2025 00:48:21 +0000 (0:00:04.420) 0:04:06.759 *** 2025-09-17 00:50:29.037929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}}}})  2025-09-17 00:50:29.037936 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.037943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 00:50:29.037950 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.037956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 00:50:29.037963 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.037970 | orchestrator | 2025-09-17 00:50:29.037977 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-17 00:50:29.037983 | orchestrator | Wednesday 17 September 2025 00:48:22 +0000 (0:00:01.061) 0:04:07.820 *** 2025-09-17 00:50:29.038007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 00:50:29.038037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 00:50:29.038047 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.038057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 00:50:29.038064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 00:50:29.038071 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.038077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 00:50:29.038088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 00:50:29.038095 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.038102 | orchestrator | 2025-09-17 00:50:29.038109 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-17 00:50:29.038115 | orchestrator | Wednesday 17 September 2025 00:48:23 +0000 (0:00:01.540) 0:04:09.361 *** 2025-09-17 00:50:29.038122 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.038129 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.038135 | orchestrator | changed: 
[testbed-node-2] 2025-09-17 00:50:29.038142 | orchestrator | 2025-09-17 00:50:29.038167 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-17 00:50:29.038174 | orchestrator | Wednesday 17 September 2025 00:48:26 +0000 (0:00:02.328) 0:04:11.689 *** 2025-09-17 00:50:29.038181 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.038187 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.038194 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.038200 | orchestrator | 2025-09-17 00:50:29.038207 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-17 00:50:29.038214 | orchestrator | Wednesday 17 September 2025 00:48:29 +0000 (0:00:02.907) 0:04:14.597 *** 2025-09-17 00:50:29.038221 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-17 00:50:29.038228 | orchestrator | 2025-09-17 00:50:29.038235 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-17 00:50:29.038242 | orchestrator | Wednesday 17 September 2025 00:48:30 +0000 (0:00:01.309) 0:04:15.906 *** 2025-09-17 00:50:29.038249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 00:50:29.038256 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.038263 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 00:50:29.038270 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.038298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 00:50:29.038306 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.038313 | orchestrator | 2025-09-17 00:50:29.038323 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-17 00:50:29.038330 | orchestrator | Wednesday 17 September 2025 00:48:31 +0000 (0:00:01.301) 0:04:17.208 *** 2025-09-17 00:50:29.038340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 00:50:29.038347 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.038354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 00:50:29.038361 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.038368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 00:50:29.038375 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.038381 | orchestrator | 2025-09-17 00:50:29.038388 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-17 00:50:29.038395 | orchestrator | Wednesday 17 September 2025 00:48:32 +0000 (0:00:01.291) 0:04:18.499 *** 2025-09-17 00:50:29.038401 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.038408 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.038414 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.038421 | orchestrator | 2025-09-17 
00:50:29.038428 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-17 00:50:29.038434 | orchestrator | Wednesday 17 September 2025 00:48:34 +0000 (0:00:01.819) 0:04:20.319 *** 2025-09-17 00:50:29.038441 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.038448 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.038455 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.038462 | orchestrator | 2025-09-17 00:50:29.038468 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-17 00:50:29.038475 | orchestrator | Wednesday 17 September 2025 00:48:37 +0000 (0:00:02.359) 0:04:22.679 *** 2025-09-17 00:50:29.038482 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.038488 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.038495 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.038501 | orchestrator | 2025-09-17 00:50:29.038508 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-17 00:50:29.038515 | orchestrator | Wednesday 17 September 2025 00:48:40 +0000 (0:00:03.208) 0:04:25.887 *** 2025-09-17 00:50:29.038522 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-17 00:50:29.038529 | orchestrator | 2025-09-17 00:50:29.038536 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-17 00:50:29.038542 | orchestrator | Wednesday 17 September 2025 00:48:41 +0000 (0:00:00.840) 0:04:26.728 *** 2025-09-17 00:50:29.038554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 00:50:29.038561 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.038583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 00:50:29.038591 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.038601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 00:50:29.038608 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.038615 | orchestrator | 2025-09-17 00:50:29.038621 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-17 00:50:29.038628 | orchestrator | Wednesday 17 September 2025 00:48:42 +0000 (0:00:01.291) 0:04:28.020 *** 2025-09-17 00:50:29.038635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 00:50:29.038642 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.038648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 00:50:29.038655 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.038662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 00:50:29.038669 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.038679 | orchestrator | 2025-09-17 00:50:29.038685 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 
2025-09-17 00:50:29.038692 | orchestrator | Wednesday 17 September 2025 00:48:43 +0000 (0:00:01.331) 0:04:29.352 *** 2025-09-17 00:50:29.038699 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.038705 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.038712 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.038718 | orchestrator | 2025-09-17 00:50:29.038725 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-17 00:50:29.038732 | orchestrator | Wednesday 17 September 2025 00:48:45 +0000 (0:00:01.495) 0:04:30.848 *** 2025-09-17 00:50:29.038738 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.038745 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.038752 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.038758 | orchestrator | 2025-09-17 00:50:29.038765 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-17 00:50:29.038771 | orchestrator | Wednesday 17 September 2025 00:48:47 +0000 (0:00:02.381) 0:04:33.229 *** 2025-09-17 00:50:29.038778 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.038785 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.038791 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.038798 | orchestrator | 2025-09-17 00:50:29.038805 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-17 00:50:29.038811 | orchestrator | Wednesday 17 September 2025 00:48:50 +0000 (0:00:03.154) 0:04:36.383 *** 2025-09-17 00:50:29.038818 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.038825 | orchestrator | 2025-09-17 00:50:29.038831 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-17 00:50:29.038838 | orchestrator | Wednesday 17 September 2025 00:48:52 +0000 (0:00:01.528) 0:04:37.912 *** 2025-09-17 
00:50:29.038863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.038871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 00:50:29.038879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.038890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.038912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.038920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.038949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.038957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 00:50:29.038965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 00:50:29.038976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.038984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.038991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.039013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.039024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.039031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.039038 | orchestrator | 2025-09-17 00:50:29.039045 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-17 00:50:29.039056 | orchestrator | Wednesday 17 September 2025 00:48:55 +0000 (0:00:03.268) 0:04:41.181 *** 2025-09-17 00:50:29.039064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.039071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 00:50:29.039078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.039104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.039112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.039119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 00:50:29.039130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.039137 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.039144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.039151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.039158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.039180 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.039192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.039199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 00:50:29.039211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.039218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 00:50:29.039225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:50:29.039231 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.039238 | orchestrator | 2025-09-17 00:50:29.039245 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-17 00:50:29.039252 | orchestrator | Wednesday 17 September 2025 00:48:56 +0000 (0:00:00.764) 0:04:41.945 *** 2025-09-17 00:50:29.039258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 00:50:29.039265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 00:50:29.039272 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.039295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 00:50:29.039302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 00:50:29.039309 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.039320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 00:50:29.039327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 00:50:29.039339 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.039346 | orchestrator | 2025-09-17 00:50:29.039353 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-17 00:50:29.039359 | orchestrator | Wednesday 17 September 2025 00:48:57 +0000 (0:00:01.444) 0:04:43.390 *** 2025-09-17 00:50:29.039366 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.039372 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.039379 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.039385 | orchestrator | 2025-09-17 00:50:29.039392 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-17 00:50:29.039398 | orchestrator | Wednesday 17 
September 2025 00:48:59 +0000 (0:00:01.429) 0:04:44.820 *** 2025-09-17 00:50:29.039405 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.039411 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.039418 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.039425 | orchestrator | 2025-09-17 00:50:29.039431 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-17 00:50:29.039438 | orchestrator | Wednesday 17 September 2025 00:49:01 +0000 (0:00:02.073) 0:04:46.893 *** 2025-09-17 00:50:29.039445 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.039451 | orchestrator | 2025-09-17 00:50:29.039458 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-17 00:50:29.039464 | orchestrator | Wednesday 17 September 2025 00:49:02 +0000 (0:00:01.324) 0:04:48.218 *** 2025-09-17 00:50:29.039471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:50:29.039479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:50:29.039501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:50:29.039518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:50:29.039527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:50:29.039535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:50:29.039542 | orchestrator | 2025-09-17 00:50:29.039549 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-17 00:50:29.039556 | orchestrator | Wednesday 17 September 2025 00:49:07 +0000 (0:00:05.278) 0:04:53.497 *** 2025-09-17 00:50:29.039578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}})  2025-09-17 00:50:29.039595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 00:50:29.039602 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.039609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 00:50:29.039617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 00:50:29.039624 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.039645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 00:50:29.039664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 00:50:29.039671 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.039678 | orchestrator | 2025-09-17 00:50:29.039685 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-17 00:50:29.039691 | orchestrator | Wednesday 17 September 2025 00:49:08 +0000 (0:00:00.695) 0:04:54.192 *** 2025-09-17 00:50:29.039698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-17 00:50:29.039705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}})  2025-09-17 00:50:29.039712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-17 00:50:29.039719 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.039726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-17 00:50:29.039732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-17 00:50:29.039739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-17 00:50:29.039746 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.039753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-17 00:50:29.039759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-17 00:50:29.039766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-17 00:50:29.039777 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.039784 | orchestrator | 2025-09-17 00:50:29.039791 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-17 00:50:29.039797 | orchestrator | Wednesday 17 September 2025 00:49:09 +0000 (0:00:00.939) 0:04:55.131 *** 2025-09-17 00:50:29.039804 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.039811 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.039817 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.039824 | orchestrator | 2025-09-17 00:50:29.039830 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-17 00:50:29.039837 | orchestrator | Wednesday 17 September 2025 00:49:10 +0000 (0:00:00.770) 0:04:55.902 *** 2025-09-17 00:50:29.039844 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.039850 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.039857 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.039863 | orchestrator | 2025-09-17 00:50:29.039885 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-17 00:50:29.039892 | orchestrator | Wednesday 17 September 2025 00:49:11 +0000 (0:00:01.277) 0:04:57.180 *** 2025-09-17 00:50:29.039942 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.039949 | orchestrator | 2025-09-17 00:50:29.039955 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-17 00:50:29.039962 | orchestrator | Wednesday 17 September 2025 00:49:12 +0000 (0:00:01.406) 0:04:58.586 *** 2025-09-17 00:50:29.039973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 00:50:29.039981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 00:50:29.039988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}}) 2025-09-17 00:50:29.039996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 00:50:29.040015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 00:50:29.040052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 00:50:29.040073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 00:50:29.040085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 00:50:29.040092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 00:50:29.040120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 00:50:29.040128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 00:50:29.040139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 00:50:29.040168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 00:50:29.040176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 00:50:29.040183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 00:50:29.040194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 00:50:29.040203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040227 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 00:50:29.040233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 00:50:29.040252 | orchestrator | 2025-09-17 00:50:29.040259 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-17 00:50:29.040265 | orchestrator | Wednesday 17 September 2025 00:49:17 +0000 (0:00:04.337) 0:05:02.924 *** 2025-09-17 00:50:29.040271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-17 00:50:29.040278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 00:50:29.040288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 00:50:29.040311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-17 00:50:29.040324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 00:50:29.040330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 00:50:29.040357 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.040364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-17 00:50:29.040376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 00:50:29.040383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 00:50:29.040405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-17 00:50:29.040412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 00:50:29.040423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-17 00:50:29.040469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 00:50:29.040488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 00:50:29.040498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040508 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.040515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2025-09-17 00:50:29.040532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-17 00:50:29.040539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 00:50:29.040546 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 00:50:29.040565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 00:50:29.040577 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.040584 | orchestrator | 2025-09-17 00:50:29.040594 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-17 00:50:29.040600 | orchestrator | Wednesday 17 September 2025 00:49:18 +0000 (0:00:01.162) 0:05:04.087 *** 2025-09-17 00:50:29.040607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-17 00:50:29.040614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-17 00:50:29.040620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 00:50:29.040627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 00:50:29.040634 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.040641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-17 00:50:29.040647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-17 00:50:29.040654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 00:50:29.040660 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 00:50:29.040667 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.040673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-17 00:50:29.040679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-17 00:50:29.040686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 00:50:29.040695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 00:50:29.040702 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.040708 | orchestrator | 2025-09-17 00:50:29.040714 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-17 00:50:29.040725 | orchestrator | Wednesday 17 September 2025 00:49:19 +0000 (0:00:00.981) 0:05:05.069 *** 2025-09-17 00:50:29.040732 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.040738 
| orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.040744 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.040750 | orchestrator | 2025-09-17 00:50:29.040760 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-17 00:50:29.040767 | orchestrator | Wednesday 17 September 2025 00:49:19 +0000 (0:00:00.442) 0:05:05.511 *** 2025-09-17 00:50:29.040773 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.040779 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.040785 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.040791 | orchestrator | 2025-09-17 00:50:29.040798 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-17 00:50:29.040804 | orchestrator | Wednesday 17 September 2025 00:49:21 +0000 (0:00:01.393) 0:05:06.904 *** 2025-09-17 00:50:29.040810 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.040816 | orchestrator | 2025-09-17 00:50:29.040822 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-17 00:50:29.040828 | orchestrator | Wednesday 17 September 2025 00:49:23 +0000 (0:00:01.716) 0:05:08.621 *** 2025-09-17 00:50:29.040835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 00:50:29.040842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 00:50:29.040849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 00:50:29.040860 | orchestrator | 2025-09-17 00:50:29.040869 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-17 00:50:29.040876 | orchestrator | Wednesday 17 September 2025 00:49:25 +0000 (0:00:02.525) 0:05:11.146 *** 2025-09-17 00:50:29.040886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-17 00:50:29.040893 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.040912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-17 00:50:29.040919 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.040926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-17 00:50:29.040932 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.040939 | orchestrator | 2025-09-17 00:50:29.040945 | orchestrator | TASK 
[haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-17 00:50:29.040951 | orchestrator | Wednesday 17 September 2025 00:49:25 +0000 (0:00:00.405) 0:05:11.551 *** 2025-09-17 00:50:29.040958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-17 00:50:29.040969 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.040975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-17 00:50:29.040981 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.040988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-17 00:50:29.040994 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.041000 | orchestrator | 2025-09-17 00:50:29.041007 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-17 00:50:29.041013 | orchestrator | Wednesday 17 September 2025 00:49:26 +0000 (0:00:00.971) 0:05:12.523 *** 2025-09-17 00:50:29.041022 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.041029 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.041035 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.041041 | orchestrator | 2025-09-17 00:50:29.041047 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-17 00:50:29.041053 | orchestrator | Wednesday 17 September 2025 00:49:27 +0000 (0:00:00.445) 0:05:12.968 *** 2025-09-17 00:50:29.041060 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.041066 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.041072 | orchestrator | skipping: 
[testbed-node-2] 2025-09-17 00:50:29.041078 | orchestrator | 2025-09-17 00:50:29.041084 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-17 00:50:29.041095 | orchestrator | Wednesday 17 September 2025 00:49:28 +0000 (0:00:01.287) 0:05:14.256 *** 2025-09-17 00:50:29.041101 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:50:29.041107 | orchestrator | 2025-09-17 00:50:29.041113 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-17 00:50:29.041120 | orchestrator | Wednesday 17 September 2025 00:49:30 +0000 (0:00:01.708) 0:05:15.964 *** 2025-09-17 00:50:29.041126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.041133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.041144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.041154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.041165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.041172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-17 00:50:29.041179 | orchestrator | 2025-09-17 00:50:29.041185 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-17 00:50:29.041192 | orchestrator | Wednesday 17 September 2025 00:49:36 +0000 (0:00:06.136) 0:05:22.101 *** 2025-09-17 00:50:29.041202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.041212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.041219 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.041230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.041237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.041244 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.041250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.041261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-17 00:50:29.041268 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.041274 | orchestrator | 2025-09-17 00:50:29.041280 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-17 00:50:29.041289 | orchestrator | Wednesday 17 September 2025 00:49:37 +0000 (0:00:00.639) 0:05:22.741 *** 2025-09-17 00:50:29.041296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041325 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.041331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041361 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.041367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-17 00:50:29.041393 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.041399 | orchestrator | 2025-09-17 00:50:29.041405 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-17 00:50:29.041412 | orchestrator | Wednesday 17 September 2025 00:49:38 +0000 (0:00:01.761) 0:05:24.502 *** 2025-09-17 00:50:29.041418 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.041424 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.041430 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.041437 | orchestrator | 2025-09-17 00:50:29.041443 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-17 00:50:29.041449 | orchestrator | Wednesday 17 September 2025 00:49:40 +0000 (0:00:01.410) 0:05:25.912 *** 2025-09-17 00:50:29.041456 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.041462 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.041468 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.041474 | orchestrator | 2025-09-17 00:50:29.041480 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-17 00:50:29.041487 | orchestrator | Wednesday 17 September 2025 00:49:42 +0000 (0:00:02.174) 0:05:28.086 *** 2025-09-17 00:50:29.041493 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.041499 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.041505 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.041511 | orchestrator | 2025-09-17 00:50:29.041518 | 
orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-17 00:50:29.041524 | orchestrator | Wednesday 17 September 2025 00:49:42 +0000 (0:00:00.330) 0:05:28.417 *** 2025-09-17 00:50:29.041530 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.041536 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.041542 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.041549 | orchestrator | 2025-09-17 00:50:29.041555 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-17 00:50:29.041564 | orchestrator | Wednesday 17 September 2025 00:49:43 +0000 (0:00:00.300) 0:05:28.718 *** 2025-09-17 00:50:29.041570 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.041577 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.041583 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.041589 | orchestrator | 2025-09-17 00:50:29.041595 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-17 00:50:29.041601 | orchestrator | Wednesday 17 September 2025 00:49:43 +0000 (0:00:00.552) 0:05:29.270 *** 2025-09-17 00:50:29.041607 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.041613 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.041619 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.041626 | orchestrator | 2025-09-17 00:50:29.041635 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-17 00:50:29.041646 | orchestrator | Wednesday 17 September 2025 00:49:43 +0000 (0:00:00.315) 0:05:29.586 *** 2025-09-17 00:50:29.041652 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.041658 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.041664 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.041670 | orchestrator | 2025-09-17 00:50:29.041676 | 
orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-17 00:50:29.041683 | orchestrator | Wednesday 17 September 2025 00:49:44 +0000 (0:00:00.302) 0:05:29.889 *** 2025-09-17 00:50:29.041689 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.041695 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.041701 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.041707 | orchestrator | 2025-09-17 00:50:29.041713 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-17 00:50:29.041720 | orchestrator | Wednesday 17 September 2025 00:49:45 +0000 (0:00:00.813) 0:05:30.703 *** 2025-09-17 00:50:29.041726 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.041732 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.041738 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.041744 | orchestrator | 2025-09-17 00:50:29.041751 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-17 00:50:29.041757 | orchestrator | Wednesday 17 September 2025 00:49:45 +0000 (0:00:00.696) 0:05:31.399 *** 2025-09-17 00:50:29.041763 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.041769 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.041775 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.041782 | orchestrator | 2025-09-17 00:50:29.041788 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-17 00:50:29.041794 | orchestrator | Wednesday 17 September 2025 00:49:46 +0000 (0:00:00.341) 0:05:31.740 *** 2025-09-17 00:50:29.041800 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.041807 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.041813 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.041819 | orchestrator | 2025-09-17 00:50:29.041825 | orchestrator | RUNNING HANDLER [loadbalancer : Stop 
backup haproxy container] ***************** 2025-09-17 00:50:29.041831 | orchestrator | Wednesday 17 September 2025 00:49:47 +0000 (0:00:00.936) 0:05:32.677 *** 2025-09-17 00:50:29.041838 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.041844 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.041850 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.041856 | orchestrator | 2025-09-17 00:50:29.041862 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-17 00:50:29.041869 | orchestrator | Wednesday 17 September 2025 00:49:48 +0000 (0:00:01.211) 0:05:33.888 *** 2025-09-17 00:50:29.041875 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.041881 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.041887 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.041893 | orchestrator | 2025-09-17 00:50:29.041913 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-17 00:50:29.041920 | orchestrator | Wednesday 17 September 2025 00:49:49 +0000 (0:00:00.906) 0:05:34.795 *** 2025-09-17 00:50:29.041926 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.041932 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.041938 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.041945 | orchestrator | 2025-09-17 00:50:29.041951 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-17 00:50:29.041957 | orchestrator | Wednesday 17 September 2025 00:49:58 +0000 (0:00:09.515) 0:05:44.310 *** 2025-09-17 00:50:29.041963 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.041969 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.041976 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.041982 | orchestrator | 2025-09-17 00:50:29.041988 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-17 
00:50:29.041994 | orchestrator | Wednesday 17 September 2025 00:49:59 +0000 (0:00:00.806) 0:05:45.117 *** 2025-09-17 00:50:29.042001 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.042011 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.042041 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.042049 | orchestrator | 2025-09-17 00:50:29.042055 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-17 00:50:29.042062 | orchestrator | Wednesday 17 September 2025 00:50:12 +0000 (0:00:12.896) 0:05:58.013 *** 2025-09-17 00:50:29.042068 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.042074 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.042080 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.042086 | orchestrator | 2025-09-17 00:50:29.042092 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-17 00:50:29.042098 | orchestrator | Wednesday 17 September 2025 00:50:13 +0000 (0:00:01.101) 0:05:59.115 *** 2025-09-17 00:50:29.042105 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:50:29.042111 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:50:29.042117 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:50:29.042123 | orchestrator | 2025-09-17 00:50:29.042129 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-17 00:50:29.042135 | orchestrator | Wednesday 17 September 2025 00:50:22 +0000 (0:00:09.197) 0:06:08.312 *** 2025-09-17 00:50:29.042142 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.042148 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.042154 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.042160 | orchestrator | 2025-09-17 00:50:29.042166 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-17 00:50:29.042172 | 
orchestrator | Wednesday 17 September 2025 00:50:23 +0000 (0:00:00.339) 0:06:08.652 *** 2025-09-17 00:50:29.042179 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.042188 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.042194 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.042201 | orchestrator | 2025-09-17 00:50:29.042207 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-17 00:50:29.042213 | orchestrator | Wednesday 17 September 2025 00:50:23 +0000 (0:00:00.348) 0:06:09.000 *** 2025-09-17 00:50:29.042219 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.042225 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.042231 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.042237 | orchestrator | 2025-09-17 00:50:29.042244 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-17 00:50:29.042253 | orchestrator | Wednesday 17 September 2025 00:50:24 +0000 (0:00:00.654) 0:06:09.655 *** 2025-09-17 00:50:29.042260 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.042266 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.042272 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.042278 | orchestrator | 2025-09-17 00:50:29.042284 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-17 00:50:29.042290 | orchestrator | Wednesday 17 September 2025 00:50:24 +0000 (0:00:00.346) 0:06:10.001 *** 2025-09-17 00:50:29.042297 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.042303 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.042309 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.042315 | orchestrator | 2025-09-17 00:50:29.042321 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-17 00:50:29.042327 | 
orchestrator | Wednesday 17 September 2025 00:50:24 +0000 (0:00:00.351) 0:06:10.353 *** 2025-09-17 00:50:29.042333 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:50:29.042339 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:50:29.042345 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:50:29.042351 | orchestrator | 2025-09-17 00:50:29.042357 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-17 00:50:29.042364 | orchestrator | Wednesday 17 September 2025 00:50:25 +0000 (0:00:00.345) 0:06:10.698 *** 2025-09-17 00:50:29.042370 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.042376 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.042387 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.042393 | orchestrator | 2025-09-17 00:50:29.042399 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-17 00:50:29.042405 | orchestrator | Wednesday 17 September 2025 00:50:26 +0000 (0:00:01.217) 0:06:11.916 *** 2025-09-17 00:50:29.042412 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:50:29.042418 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:50:29.042424 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:50:29.042430 | orchestrator | 2025-09-17 00:50:29.042436 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:50:29.042442 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-17 00:50:29.042449 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-17 00:50:29.042455 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-17 00:50:29.042462 | orchestrator | 2025-09-17 00:50:29.042468 | orchestrator | 2025-09-17 00:50:29.042474 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-17 00:50:29.042480 | orchestrator | Wednesday 17 September 2025 00:50:27 +0000 (0:00:00.861) 0:06:12.777 *** 2025-09-17 00:50:29.042487 | orchestrator | =============================================================================== 2025-09-17 00:50:29.042493 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.90s 2025-09-17 00:50:29.042499 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.52s 2025-09-17 00:50:29.042505 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.20s 2025-09-17 00:50:29.042511 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.14s 2025-09-17 00:50:29.042517 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.28s 2025-09-17 00:50:29.042523 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.80s 2025-09-17 00:50:29.042530 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.42s 2025-09-17 00:50:29.042536 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.36s 2025-09-17 00:50:29.042542 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.35s 2025-09-17 00:50:29.042548 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.34s 2025-09-17 00:50:29.042554 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.31s 2025-09-17 00:50:29.042560 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.29s 2025-09-17 00:50:29.042566 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.28s 2025-09-17 00:50:29.042572 | orchestrator | service-cert-copy : loadbalancer 
| Copying over extra CA certificates --- 4.15s 2025-09-17 00:50:29.042578 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.12s 2025-09-17 00:50:29.042584 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.02s 2025-09-17 00:50:29.042591 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.02s 2025-09-17 00:50:29.042597 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.90s 2025-09-17 00:50:29.042603 | orchestrator | loadbalancer : Check loadbalancer containers ---------------------------- 3.82s 2025-09-17 00:50:29.042609 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.81s 2025-09-17 00:50:32.067965 | orchestrator | 2025-09-17 00:50:32 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:50:32.068855 | orchestrator | 2025-09-17 00:50:32 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:32.068961 | orchestrator | 2025-09-17 00:50:32 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:50:32.068987 | orchestrator | 2025-09-17 00:50:32 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:35.122432 | orchestrator | 2025-09-17 00:50:35 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:50:35.122548 | orchestrator | 2025-09-17 00:50:35 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:35.123408 | orchestrator | 2025-09-17 00:50:35 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:50:35.123431 | orchestrator | 2025-09-17 00:50:35 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:38.155238 | orchestrator | 2025-09-17 00:50:38 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:50:38.155928 | 
orchestrator | 2025-09-17 00:50:38 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:38.157069 | orchestrator | 2025-09-17 00:50:38 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:50:38.157093 | orchestrator | 2025-09-17 00:50:38 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:41.185500 | orchestrator | 2025-09-17 00:50:41 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:50:41.185597 | orchestrator | 2025-09-17 00:50:41 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:41.185610 | orchestrator | 2025-09-17 00:50:41 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:50:41.185619 | orchestrator | 2025-09-17 00:50:41 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:44.211037 | orchestrator | 2025-09-17 00:50:44 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:50:44.211423 | orchestrator | 2025-09-17 00:50:44 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:44.212244 | orchestrator | 2025-09-17 00:50:44 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:50:44.212269 | orchestrator | 2025-09-17 00:50:44 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:47.242726 | orchestrator | 2025-09-17 00:50:47 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:50:47.242969 | orchestrator | 2025-09-17 00:50:47 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:47.243407 | orchestrator | 2025-09-17 00:50:47 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:50:47.243571 | orchestrator | 2025-09-17 00:50:47 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:50.273099 | orchestrator | 2025-09-17 00:50:50 | INFO  | Task 
f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:50:50.273206 | orchestrator | 2025-09-17 00:50:50 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:50.273787 | orchestrator | 2025-09-17 00:50:50 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:50:50.273925 | orchestrator | 2025-09-17 00:50:50 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:53.301726 | orchestrator | 2025-09-17 00:50:53 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:50:53.302000 | orchestrator | 2025-09-17 00:50:53 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:53.302755 | orchestrator | 2025-09-17 00:50:53 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:50:53.302807 | orchestrator | 2025-09-17 00:50:53 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:56.336355 | orchestrator | 2025-09-17 00:50:56 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:50:56.338292 | orchestrator | 2025-09-17 00:50:56 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:56.343795 | orchestrator | 2025-09-17 00:50:56 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:50:56.344030 | orchestrator | 2025-09-17 00:50:56 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:50:59.375938 | orchestrator | 2025-09-17 00:50:59 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:50:59.378211 | orchestrator | 2025-09-17 00:50:59 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:50:59.381310 | orchestrator | 2025-09-17 00:50:59 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:50:59.381375 | orchestrator | 2025-09-17 00:50:59 | INFO  | Wait 1 second(s) until the next 
check 2025-09-17 00:51:02.419680 | orchestrator | 2025-09-17 00:51:02 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:51:02.420710 | orchestrator | 2025-09-17 00:51:02 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:51:02.422098 | orchestrator | 2025-09-17 00:51:02 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:51:02.422130 | orchestrator | 2025-09-17 00:51:02 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:51:05.457030 | orchestrator | 2025-09-17 00:51:05 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:51:05.457758 | orchestrator | 2025-09-17 00:51:05 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:51:05.458945 | orchestrator | 2025-09-17 00:51:05 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:51:05.458972 | orchestrator | 2025-09-17 00:51:05 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:51:08.502324 | orchestrator | 2025-09-17 00:51:08 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:51:08.502674 | orchestrator | 2025-09-17 00:51:08 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:51:08.503724 | orchestrator | 2025-09-17 00:51:08 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:51:08.503748 | orchestrator | 2025-09-17 00:51:08 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:51:11.542696 | orchestrator | 2025-09-17 00:51:11 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:51:11.545489 | orchestrator | 2025-09-17 00:51:11 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED 2025-09-17 00:51:11.547285 | orchestrator | 2025-09-17 00:51:11 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 
2025-09-17 00:51:11.547518 | orchestrator | 2025-09-17 00:51:11 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:51:14.588852 | orchestrator | 2025-09-17 00:51:14 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED
2025-09-17 00:51:14.592944 | orchestrator | 2025-09-17 00:51:14 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state STARTED
2025-09-17 00:51:14.598183 | orchestrator | 2025-09-17 00:51:14 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED
2025-09-17 00:51:14.598204 | orchestrator | 2025-09-17 00:51:14 | INFO  | Wait 1 second(s) until the next check
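The repeated status checks above follow a simple poll-until-terminal pattern: query each task, drop the ones that have reached a final state, sleep, and repeat. A minimal sketch of such a loop, assuming hypothetical names (`wait_for_tasks` and `get_task_state` are illustrative stand-ins, not the actual osism API):

```python
import time

# Terminal states after which a task no longer needs polling.
TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task until every one reaches a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

In the log above, three task UUIDs are polled together every few seconds until one transitions to SUCCESS and its buffered Ansible output is flushed.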
2025-09-17 00:52:42.966533 | orchestrator | 2025-09-17 00:52:42 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED
2025-09-17 00:52:42.972246 | orchestrator | 2025-09-17 00:52:42 | INFO  | Task abcdd064-e562-478a-b95d-8a452d82ff08 is in state SUCCESS
2025-09-17 00:52:42.975421 | orchestrator |
2025-09-17 00:52:42.975455 | orchestrator |
2025-09-17 00:52:42.975469 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-09-17 00:52:42.975481 | orchestrator |
2025-09-17 00:52:42.975492 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-17 00:52:42.975503 | orchestrator | Wednesday 17 September 2025 00:41:46 +0000 (0:00:00.692)       0:00:00.692 ***
2025-09-17 00:52:42.975516 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:52:42.975528 | orchestrator | 2025-09-17 00:52:42.975539 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-17 00:52:42.975550 | orchestrator | Wednesday 17 September 2025 00:41:47 +0000 (0:00:01.041) 0:00:01.734 *** 2025-09-17 00:52:42.975560 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.975572 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.975583 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.975594 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.975604 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.975615 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.975625 | orchestrator | 2025-09-17 00:52:42.975636 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-17 00:52:42.975647 | orchestrator | Wednesday 17 September 2025 00:41:49 +0000 (0:00:01.543) 0:00:03.278 *** 2025-09-17 00:52:42.975657 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.975756 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.975768 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.975779 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.975790 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.975846 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.975860 | orchestrator | 2025-09-17 00:52:42.975871 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-17 00:52:42.975882 | orchestrator | Wednesday 17 September 2025 00:41:50 +0000 (0:00:00.720) 0:00:03.998 *** 2025-09-17 00:52:42.975892 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.975926 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.975937 | 
orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.975947 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.975958 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.975969 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.975979 | orchestrator | 2025-09-17 00:52:42.975990 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-17 00:52:42.976001 | orchestrator | Wednesday 17 September 2025 00:41:51 +0000 (0:00:00.950) 0:00:04.949 *** 2025-09-17 00:52:42.976014 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.976026 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.976038 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.976050 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.976062 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.976130 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.976155 | orchestrator | 2025-09-17 00:52:42.976168 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-17 00:52:42.976180 | orchestrator | Wednesday 17 September 2025 00:41:51 +0000 (0:00:00.733) 0:00:05.683 *** 2025-09-17 00:52:42.976193 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.976204 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.976216 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.976228 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.976240 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.976252 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.976263 | orchestrator | 2025-09-17 00:52:42.976276 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-17 00:52:42.976288 | orchestrator | Wednesday 17 September 2025 00:41:52 +0000 (0:00:00.727) 0:00:06.410 *** 2025-09-17 00:52:42.976302 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.976314 | orchestrator | ok: [testbed-node-4] 
2025-09-17 00:52:42.976340 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.976352 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.976481 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.976492 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.976502 | orchestrator | 2025-09-17 00:52:42.976513 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-17 00:52:42.976524 | orchestrator | Wednesday 17 September 2025 00:41:53 +0000 (0:00:01.440) 0:00:07.850 *** 2025-09-17 00:52:42.976535 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.976547 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.976557 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.976568 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.976578 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.976589 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.976599 | orchestrator | 2025-09-17 00:52:42.976610 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-17 00:52:42.976621 | orchestrator | Wednesday 17 September 2025 00:41:54 +0000 (0:00:00.861) 0:00:08.712 *** 2025-09-17 00:52:42.976632 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.976642 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.976653 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.976664 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.976674 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.976685 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.976696 | orchestrator | 2025-09-17 00:52:42.976706 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-17 00:52:42.976717 | orchestrator | Wednesday 17 September 2025 00:41:55 +0000 (0:00:00.654) 0:00:09.367 *** 2025-09-17 00:52:42.976728 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-17 00:52:42.976739 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-17 00:52:42.976749 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-17 00:52:42.976760 | orchestrator | 2025-09-17 00:52:42.976771 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-17 00:52:42.976781 | orchestrator | Wednesday 17 September 2025 00:41:56 +0000 (0:00:00.581) 0:00:09.948 *** 2025-09-17 00:52:42.976792 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.976803 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.976813 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.976824 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.976834 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.976845 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.976855 | orchestrator | 2025-09-17 00:52:42.976878 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-17 00:52:42.976890 | orchestrator | Wednesday 17 September 2025 00:41:57 +0000 (0:00:01.479) 0:00:11.428 *** 2025-09-17 00:52:42.976901 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-17 00:52:42.976932 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-17 00:52:42.976943 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-17 00:52:42.976954 | orchestrator | 2025-09-17 00:52:42.976964 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-17 00:52:42.976975 | orchestrator | Wednesday 17 September 2025 00:42:00 +0000 (0:00:02.977) 0:00:14.405 *** 2025-09-17 00:52:42.976986 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-0)  2025-09-17 00:52:42.976997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-17 00:52:42.977007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-17 00:52:42.977018 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.977064 | orchestrator | 2025-09-17 00:52:42.977077 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-17 00:52:42.977088 | orchestrator | Wednesday 17 September 2025 00:42:01 +0000 (0:00:00.592) 0:00:14.998 *** 2025-09-17 00:52:42.977183 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.977200 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.977211 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.977222 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.977233 | orchestrator | 2025-09-17 00:52:42.977244 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-17 00:52:42.977254 | orchestrator | Wednesday 17 September 2025 00:42:02 +0000 (0:00:00.886) 0:00:15.884 *** 2025-09-17 00:52:42.977268 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.977365 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.977379 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.977390 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.977401 | orchestrator | 2025-09-17 00:52:42.977412 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-17 00:52:42.977422 | orchestrator | Wednesday 17 September 2025 00:42:02 +0000 (0:00:00.214) 0:00:16.099 *** 2025-09-17 00:52:42.977444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-17 00:41:58.262302', 'end': '2025-09-17 00:41:58.555829', 'delta': '0:00:00.293527', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.977459 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-17 00:41:59.208750', 'end': '2025-09-17 00:41:59.532126', 'delta': '0:00:00.323376', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.977483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-17 00:42:00.053614', 'end': '2025-09-17 00:42:00.364504', 'delta': '0:00:00.310890', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.977495 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.977506 | orchestrator | 2025-09-17 00:52:42.977517 | orchestrator | 
TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-17 00:52:42.977528 | orchestrator | Wednesday 17 September 2025 00:42:02 +0000 (0:00:00.376) 0:00:16.475 *** 2025-09-17 00:52:42.977538 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.977549 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.977560 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.977570 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.977581 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.977592 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.977602 | orchestrator | 2025-09-17 00:52:42.977613 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-17 00:52:42.977624 | orchestrator | Wednesday 17 September 2025 00:42:04 +0000 (0:00:01.943) 0:00:18.419 *** 2025-09-17 00:52:42.977635 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-17 00:52:42.977646 | orchestrator | 2025-09-17 00:52:42.977657 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-17 00:52:42.977667 | orchestrator | Wednesday 17 September 2025 00:42:05 +0000 (0:00:00.844) 0:00:19.263 *** 2025-09-17 00:52:42.977678 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.977716 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.977728 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.977738 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.977749 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.977760 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.977782 | orchestrator | 2025-09-17 00:52:42.977793 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-17 00:52:42.977804 | orchestrator | Wednesday 17 September 2025 00:42:06 +0000 (0:00:01.445) 0:00:20.709 *** 2025-09-17 00:52:42.977815 | 
orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.977825 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.977836 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.977846 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.977857 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.977947 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.977961 | orchestrator | 2025-09-17 00:52:42.977973 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-17 00:52:42.977983 | orchestrator | Wednesday 17 September 2025 00:42:07 +0000 (0:00:01.090) 0:00:21.800 *** 2025-09-17 00:52:42.977994 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.978004 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.978062 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.978077 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.978088 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.978099 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.978118 | orchestrator | 2025-09-17 00:52:42.978129 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-17 00:52:42.978140 | orchestrator | Wednesday 17 September 2025 00:42:08 +0000 (0:00:00.865) 0:00:22.665 *** 2025-09-17 00:52:42.978151 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.978161 | orchestrator | 2025-09-17 00:52:42.978172 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-17 00:52:42.978183 | orchestrator | Wednesday 17 September 2025 00:42:09 +0000 (0:00:00.278) 0:00:22.943 *** 2025-09-17 00:52:42.978194 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.978204 | orchestrator | 2025-09-17 00:52:42.978246 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 
2025-09-17 00:52:42.978259 | orchestrator | Wednesday 17 September 2025 00:42:09 +0000 (0:00:00.298) 0:00:23.242 ***
2025-09-17 00:52:42.978270 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.978281 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.978291 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.978302 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.978313 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.978324 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.978537 | orchestrator | 
2025-09-17 00:52:42.978558 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-17 00:52:42.978569 | orchestrator | Wednesday 17 September 2025 00:42:10 +0000 (0:00:00.762) 0:00:24.005 ***
2025-09-17 00:52:42.978580 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.978591 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.978601 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.978612 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.978623 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.978633 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.978644 | orchestrator | 
2025-09-17 00:52:42.978655 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-17 00:52:42.978665 | orchestrator | Wednesday 17 September 2025 00:42:10 +0000 (0:00:00.800) 0:00:24.805 ***
2025-09-17 00:52:42.978676 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.978686 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.978697 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.978707 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.978718 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.978729 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.978740 | orchestrator | 
2025-09-17 00:52:42.978750 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-17 00:52:42.978761 | orchestrator | Wednesday 17 September 2025 00:42:11 +0000 (0:00:00.549) 0:00:25.355 ***
2025-09-17 00:52:42.978772 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.978782 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.978793 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.978803 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.978820 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.978832 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.978842 | orchestrator | 
2025-09-17 00:52:42.978854 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-17 00:52:42.978864 | orchestrator | Wednesday 17 September 2025 00:42:12 +0000 (0:00:00.803) 0:00:26.159 ***
2025-09-17 00:52:42.978875 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.978885 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.978896 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.978982 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.978995 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.979006 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.979052 | orchestrator | 
2025-09-17 00:52:42.979075 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-17 00:52:42.979087 | orchestrator | Wednesday 17 September 2025 00:42:12 +0000 (0:00:00.694) 0:00:26.854 ***
2025-09-17 00:52:42.979107 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.979185 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.979197 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.979208 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.979229 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.979240 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.979251 | orchestrator | 
2025-09-17 00:52:42.979262 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-17 00:52:42.979410 | orchestrator | Wednesday 17 September 2025 00:42:14 +0000 (0:00:01.029) 0:00:27.883 ***
2025-09-17 00:52:42.979421 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.979432 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.979443 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.979453 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.979464 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.979475 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.979486 | orchestrator | 
2025-09-17 00:52:42.979496 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-17 00:52:42.979510 | orchestrator | Wednesday 17 September 2025 00:42:14 +0000 (0:00:00.895) 0:00:28.778 ***
2025-09-17 00:52:42.979523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d-osd--block--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d', 'dm-uuid-LVM-r0ALozGhR2L6c4c7HnSkc1ujfUDnirHj3dZxSBnBOJf5ffjzIaJx0xH3iSe13R5t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 00:52:42.979545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--d1158166--3610--5fc1--bd8e--5288705939fa-osd--block--d1158166--3610--5fc1--bd8e--5288705939fa', 'dm-uuid-LVM-dR6rM0Kg4Yk1klH1e3rZpEqV5UEKMyR8IP6ZgT2lRWoV3IzM5QJnxvF0tIC8kqPz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac-osd--block--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac', 'dm-uuid-LVM-McC2YUMR0tAmxxPtPELmePGU9mXFtjqgGMi3eXu9ExMPGx9GB5MWg6FIUhVyKJBC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15-osd--block--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15', 'dm-uuid-LVM-5tViTeBQ8Oc8FV55WuseHuulgx8yDHyMvxg1WaUVs60eWQXhd242ptbzYumv4J0O'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2dc6576b--ad92--58b3--afc8--22b8ce20905e-osd--block--2dc6576b--ad92--58b3--afc8--22b8ce20905e', 'dm-uuid-LVM-pg03lch3KeYFVodEW4yidR22kwuRJWf4FMzgfXPuysxufP7dxlXYlkXK1PxX2k6x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979853 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7b5a8de--6218--5c80--971a--bac3422a4161-osd--block--a7b5a8de--6218--5c80--971a--bac3422a4161', 'dm-uuid-LVM-rAcXXceLpCWNdtel1qhxoK03BK36ONz0uhiweTSt4wUIKPoTcUAf36ISGrRTjdlw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979921 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.979997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7061de2-0566-4272-9d34-57a6f035e6cb', 'scsi-SQEMU_QEMU_HARDDISK_a7061de2-0566-4272-9d34-57a6f035e6cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7061de2-0566-4272-9d34-57a6f035e6cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_a7061de2-0566-4272-9d34-57a6f035e6cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7061de2-0566-4272-9d34-57a6f035e6cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_a7061de2-0566-4272-9d34-57a6f035e6cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7061de2-0566-4272-9d34-57a6f035e6cb-part15', 'scsi-SQEMU_QEMU_HARDDISK_a7061de2-0566-4272-9d34-57a6f035e6cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7061de2-0566-4272-9d34-57a6f035e6cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_a7061de2-0566-4272-9d34-57a6f035e6cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980129 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part1', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part14', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part15', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part16', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac-osd--block--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WSmpyP-VGDL-Cazr-wGD7-fLQw-LGiy-vjHBIz', 'scsi-0QEMU_QEMU_HARDDISK_03b82624-b2d4-4492-aa08-93320337b68f', 'scsi-SQEMU_QEMU_HARDDISK_03b82624-b2d4-4492-aa08-93320337b68f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part1', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part14', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part15', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part16', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980206 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15-osd--block--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UQTxLh-k3Ne-eFGo-NHuZ-hAu5-qrVj-eddquS', 'scsi-0QEMU_QEMU_HARDDISK_6f825aad-5321-4538-8ab0-212b689e74fb', 'scsi-SQEMU_QEMU_HARDDISK_6f825aad-5321-4538-8ab0-212b689e74fb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a', 'scsi-SQEMU_QEMU_HARDDISK_23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d-osd--block--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QzUXag-mzmG-28Du-zQhp-kWL6-8Jlr-5JkD4t', 'scsi-0QEMU_QEMU_HARDDISK_47b64ee5-5944-488f-91ba-80947343c2c4', 'scsi-SQEMU_QEMU_HARDDISK_47b64ee5-5944-488f-91ba-80947343c2c4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d1158166--3610--5fc1--bd8e--5288705939fa-osd--block--d1158166--3610--5fc1--bd8e--5288705939fa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PyuIpC-udvS-pGEe-yyK7-PyS9-dhMf-1dMXyQ', 'scsi-0QEMU_QEMU_HARDDISK_69134018-d148-466a-9d44-263112a1226d', 'scsi-SQEMU_QEMU_HARDDISK_69134018-d148-466a-9d44-263112a1226d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b516b0-60cf-4ba1-b912-e488bac04690', 'scsi-SQEMU_QEMU_HARDDISK_34b516b0-60cf-4ba1-b912-e488bac04690'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980426 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.980438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980448 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.980464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980498 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.980509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part1', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part14', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part15', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part16', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d', 'scsi-SQEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part1', 'scsi-SQEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part14', 'scsi-SQEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part15', 'scsi-SQEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part16', 'scsi-SQEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part16'], 'labels': ['BOOT'], 'masters': [], 
'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2dc6576b--ad92--58b3--afc8--22b8ce20905e-osd--block--2dc6576b--ad92--58b3--afc8--22b8ce20905e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-w8FV3c-Boa2-FlG3-ELoA-Z810-NCtv-GGfCh5', 'scsi-0QEMU_QEMU_HARDDISK_833e18f8-a2f7-4c8c-b617-8f83ac55bde9', 'scsi-SQEMU_QEMU_HARDDISK_833e18f8-a2f7-4c8c-b617-8f83ac55bde9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a7b5a8de--6218--5c80--971a--bac3422a4161-osd--block--a7b5a8de--6218--5c80--971a--bac3422a4161'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i96O0I-6ZhJ-pW7N-qvO4-FBsN-X9RC-YPRKiL', 'scsi-0QEMU_QEMU_HARDDISK_6d2e8bc3-4c44-4e8e-a645-39611fbfc66e', 'scsi-SQEMU_QEMU_HARDDISK_6d2e8bc3-4c44-4e8e-a645-39611fbfc66e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_922621dd-972b-4e9a-bc9e-e1e44ba503f7', 'scsi-SQEMU_QEMU_HARDDISK_922621dd-972b-4e9a-bc9e-e1e44ba503f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980635 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.980646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:52:42.980785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f', 'scsi-SQEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980804 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980815 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.980834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:52:42.980846 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.980857 | orchestrator | 2025-09-17 00:52:42.980868 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-17 00:52:42.980879 | orchestrator | Wednesday 17 September 2025 00:42:16 +0000 (0:00:01.358) 0:00:30.137 *** 2025-09-17 00:52:42.980890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac-osd--block--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac', 'dm-uuid-LVM-McC2YUMR0tAmxxPtPELmePGU9mXFtjqgGMi3eXu9ExMPGx9GB5MWg6FIUhVyKJBC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.980961 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15-osd--block--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15', 'dm-uuid-LVM-5tViTeBQ8Oc8FV55WuseHuulgx8yDHyMvxg1WaUVs60eWQXhd242ptbzYumv4J0O'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.980976 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2025-09-17 00:52:42.980992 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.981023 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.981054 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.981077 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.981105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.981126 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.981139 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d-osd--block--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d', 'dm-uuid-LVM-r0ALozGhR2L6c4c7HnSkc1ujfUDnirHj3dZxSBnBOJf5ffjzIaJx0xH3iSe13R5t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.981157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.981175 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d1158166--3610--5fc1--bd8e--5288705939fa-osd--block--d1158166--3610--5fc1--bd8e--5288705939fa', 'dm-uuid-LVM-dR6rM0Kg4Yk1klH1e3rZpEqV5UEKMyR8IP6ZgT2lRWoV3IzM5QJnxvF0tIC8kqPz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-17 00:52:42.981273 | orchestrator | skipping: [testbed-node-3] => (items: sda, sdb, sdc, sdd, sr0; skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')  
2025-09-17 00:52:42.981419 | orchestrator | skipping: [testbed-node-3]  
2025-09-17 00:52:42.981305 | orchestrator | skipping: [testbed-node-4] => (items: loop0-loop7, sda, sdb, sdc, sdd, sr0; skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')  
2025-09-17 00:52:42.981603 | orchestrator | skipping: [testbed-node-4]  
2025-09-17 00:52:42.981560 | orchestrator | skipping: [testbed-node-5] => (items: dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')  
2025-09-17 00:52:42.981665 | orchestrator | skipping: [testbed-node-0] => (items: loop0-loop7, sda, sr0; skip_reason: 'Conditional result was False', false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')  
2025-09-17 00:52:42.982160 | orchestrator | skipping: [testbed-node-0]  
2025-09-17 00:52:42.982185 | orchestrator | skipping: [testbed-node-1] => (items: loop0, loop1; skip_reason: 'Conditional result was False', false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')  
2025-09-17 00:52:42.982216 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982226 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982237 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982247 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982264 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982275 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982291 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.982306 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d', 'scsi-SQEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part1', 'scsi-SQEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part14', 'scsi-SQEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part15', 'scsi-SQEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part16', 'scsi-SQEMU_QEMU_HARDDISK_a65335ad-556c-497a-b79b-8ac858b0e80d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-17 00:52:42.982317 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982327 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.982342 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982357 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982373 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982383 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982393 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982403 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982418 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982428 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:52:42.982451 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f', 'scsi-SQEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d9d09841-f300-4329-a2ac-b45b236de72f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 00:52:42.982462 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 00:52:42.982473 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.982483 | orchestrator |
2025-09-17 00:52:42.982494 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-17 00:52:42.982505 | orchestrator | Wednesday 17 September 2025 00:42:17 +0000 (0:00:00.881) 0:00:31.019 ***
2025-09-17 00:52:42.982521 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.982533 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.982544 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.982554 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.982565 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.982576 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.982593 | orchestrator |
2025-09-17 00:52:42.982604 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-17 00:52:42.982615 | orchestrator | Wednesday 17 September 2025 00:42:18 +0000 (0:00:01.155) 0:00:32.174 ***
2025-09-17 00:52:42.982625 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.982636 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.982646 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.982657 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.982667 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.982678 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.982688 | orchestrator |
2025-09-17 00:52:42.982699 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-17 00:52:42.982710 | orchestrator | Wednesday 17 September 2025 00:42:18 +0000 (0:00:00.642) 0:00:32.817 ***
2025-09-17 00:52:42.982721 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.982732 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.982743 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.982753 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.982764 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.982774 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.982786 | orchestrator |
2025-09-17 00:52:42.982796 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-17 00:52:42.982807 | orchestrator | Wednesday 17 September 2025 00:42:19 +0000 (0:00:00.481) 0:00:33.413 ***
2025-09-17 00:52:42.982822 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.982833 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.982844 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.982854 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.982863 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.982872 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.982882 | orchestrator |
2025-09-17 00:52:42.982891 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-17 00:52:42.982901 | orchestrator | Wednesday 17 September 2025 00:42:20 +0000 (0:00:00.730) 0:00:33.895 ***
2025-09-17 00:52:42.982976 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.982985 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.982995 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.983004 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.983014 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.983023 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.983033 | orchestrator |
2025-09-17 00:52:42.983042 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-17 00:52:42.983052 | orchestrator | Wednesday 17 September 2025 00:42:20 +0000 (0:00:01.001) 0:00:34.625 ***
2025-09-17 00:52:42.983061 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.983071 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.983080 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.983089 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.983099 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.983108 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.983117 | orchestrator |
2025-09-17 00:52:42.983127 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-17 00:52:42.983137 | orchestrator | Wednesday 17 September 2025 00:42:21 +0000 (0:00:01.001) 0:00:35.627 ***
2025-09-17 00:52:42.983146 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-17 00:52:42.983156 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-17 00:52:42.983166 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-17 00:52:42.983175 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-17 00:52:42.983184 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-17 00:52:42.983194 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 00:52:42.983203 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-17 00:52:42.983213 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-17 00:52:42.983229 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-17 00:52:42.983239 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-09-17 00:52:42.983248 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-17 00:52:42.983257 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-09-17 00:52:42.983267 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-09-17 00:52:42.983276 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-17 00:52:42.983285 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-17 00:52:42.983295 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-09-17 00:52:42.983304 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-09-17 00:52:42.983314 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-09-17 00:52:42.983323 | orchestrator |
2025-09-17 00:52:42.983333 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-17 00:52:42.983343 | orchestrator | Wednesday 17 September 2025 00:42:25 +0000 (0:00:03.856) 0:00:39.483 ***
2025-09-17 00:52:42.983352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-17 00:52:42.983362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-17 00:52:42.983371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-17 00:52:42.983381 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.983390 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-17 00:52:42.983399 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-17 00:52:42.983409 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-17 00:52:42.983418 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.983428 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-17 00:52:42.983437 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-17 00:52:42.983452 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-17 00:52:42.983461 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.983471 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 00:52:42.983481 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-17 00:52:42.983490 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-17 00:52:42.983499 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.983509 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-17 00:52:42.983518 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-17 00:52:42.983528 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-17 00:52:42.983537 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-17 00:52:42.983546 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-17 00:52:42.983556 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.983565 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-17 00:52:42.983575 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.983584 | orchestrator |
2025-09-17 00:52:42.983594 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-17 00:52:42.983603 | orchestrator | Wednesday 17 September 2025 00:42:26 +0000 (0:00:01.350) 0:00:40.833 ***
2025-09-17 00:52:42.983613 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.983622 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.983632 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.983650 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:52:42.983660 | orchestrator |
2025-09-17 00:52:42.983670 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-17 00:52:42.983680 | orchestrator | Wednesday 17 September 2025 00:42:28 +0000 (0:00:01.127) 0:00:41.961 ***
2025-09-17 00:52:42.983695 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.983705 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.983714 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.983723 | orchestrator |
2025-09-17 00:52:42.983733 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-17 00:52:42.983743 | orchestrator | Wednesday 17 September 2025 00:42:28 +0000 (0:00:00.577) 0:00:42.538 ***
2025-09-17 00:52:42.983752 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.983762 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.983771 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.983780 | orchestrator |
2025-09-17 00:52:42.983790 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-17 00:52:42.983800 | orchestrator | Wednesday 17 September 2025 00:42:29 +0000 (0:00:00.338) 0:00:42.877 ***
2025-09-17 00:52:42.983809 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.983818 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.983828 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.983837 | orchestrator |
2025-09-17 00:52:42.983846 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-17 00:52:42.983856 | orchestrator | Wednesday 17 September 2025 00:42:30 +0000 (0:00:00.994) 0:00:43.871 ***
2025-09-17 00:52:42.983866 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.983875 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.983884 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.983894 | orchestrator |
2025-09-17 00:52:42.983920 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-17 00:52:42.983931 | orchestrator | Wednesday 17 September 2025 00:42:30 +0000 (0:00:00.620) 0:00:44.491 ***
2025-09-17 00:52:42.983940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.983950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:52:42.983959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:52:42.983969 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.983978 | orchestrator |
2025-09-17 00:52:42.983988 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-17 00:52:42.983998 | orchestrator | Wednesday 17 September 2025 00:42:31 +0000 (0:00:00.390) 0:00:44.882 ***
2025-09-17 00:52:42.984007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.984017 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:52:42.984026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:52:42.984036 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.984045 | orchestrator |
2025-09-17 00:52:42.984055 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-17 00:52:42.984064 | orchestrator | Wednesday 17 September 2025 00:42:31 +0000 (0:00:00.347) 0:00:45.230 ***
2025-09-17 00:52:42.984073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.984083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:52:42.984092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:52:42.984102 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.984111 | orchestrator |
2025-09-17 00:52:42.984121 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-17 00:52:42.984130 | orchestrator | Wednesday 17 September 2025 00:42:31 +0000 (0:00:00.335) 0:00:45.565 ***
2025-09-17 00:52:42.984140 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.984149 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.984159 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.984168 | orchestrator |
2025-09-17 00:52:42.984178 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-17 00:52:42.984187 | orchestrator | Wednesday 17 September 2025 00:42:32 +0000 (0:00:00.436) 0:00:46.001 ***
2025-09-17 00:52:42.984196 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-17 00:52:42.984212 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-17 00:52:42.984222 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-17 00:52:42.984231 | orchestrator |
2025-09-17 00:52:42.984245 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-17 00:52:42.984255 | orchestrator | Wednesday 17 September 2025 00:42:33 +0000 (0:00:01.018) 0:00:47.020 ***
2025-09-17 00:52:42.984265 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-17 00:52:42.984275 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-17 00:52:42.984285 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-17 00:52:42.984294 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.984304 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-17 00:52:42.984313 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-17 00:52:42.984323 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-17 00:52:42.984332 | orchestrator |
2025-09-17 00:52:42.984342 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-17 00:52:42.984351 | orchestrator | Wednesday 17 September 2025 00:42:34 +0000 (0:00:00.942) 0:00:47.963 ***
2025-09-17 00:52:42.984361 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-17 00:52:42.984370 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-17 00:52:42.984384 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-17 00:52:42.984393 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.984403 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-17 00:52:42.984413 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-17 00:52:42.984422 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-17 00:52:42.984432 | orchestrator |
2025-09-17 00:52:42.984441 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-17 00:52:42.984451 | orchestrator | Wednesday 17 September 2025 00:42:36 +0000 (0:00:02.164) 0:00:50.128 ***
2025-09-17 00:52:42.984461 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:42.984471 | orchestrator |
2025-09-17 00:52:42.984480 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-17 00:52:42.984490 | orchestrator | Wednesday 17 September 2025 00:42:37 +0000 (0:00:01.626) 0:00:51.754 ***
2025-09-17 00:52:42.984500 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:42.984509 | orchestrator |
2025-09-17 00:52:42.984519 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-17 00:52:42.984528 | orchestrator | Wednesday 17 September 2025 00:42:39 +0000 (0:00:01.226) 0:00:52.981 ***
2025-09-17 00:52:42.984537 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.984547 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.984556 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.984566 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.984575 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.984585 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.984594 | orchestrator |
2025-09-17 00:52:42.984604 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-17 00:52:42.984613 | orchestrator | Wednesday 17 September 2025 00:42:40 +0000 (0:00:01.091) 0:00:54.073 ***
2025-09-17 00:52:42.984629 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.984638 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.984648 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.984657 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.984667 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.984676 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.984686 | orchestrator |
2025-09-17 00:52:42.984696 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-17 00:52:42.984705 | orchestrator | Wednesday 17 September 2025 00:42:41 +0000 (0:00:01.017) 0:00:55.090 ***
2025-09-17 00:52:42.984715 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.984724 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.984733 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.984743 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.984752 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.984762 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.984771 | orchestrator |
2025-09-17 00:52:42.984781 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-17 00:52:42.984790 | orchestrator | Wednesday 17 September 2025 00:42:42 +0000 (0:00:01.366) 0:00:56.457 ***
2025-09-17 00:52:42.984800 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.984809 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.984819 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.984828 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.984838 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.984847 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.984856 | orchestrator |
2025-09-17 00:52:42.984866 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-17 00:52:42.984876 | orchestrator | Wednesday 17 September 2025 00:42:43 +0000 (0:00:00.821) 0:00:57.278 ***
2025-09-17 00:52:42.984885 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.984894 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.984955 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.984967 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.984976 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.984986 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.984996 | orchestrator |
2025-09-17 00:52:42.985005 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-17 00:52:42.985020 | orchestrator | Wednesday 17 September 2025 00:42:44 +0000 (0:00:01.452) 0:00:58.731 ***
2025-09-17 00:52:42.985030 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.985039 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.985049 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.985058 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.985068 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.985077 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.985086 | orchestrator |
2025-09-17 00:52:42.985094 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-17 00:52:42.985101 | orchestrator | Wednesday 17 September 2025 00:42:45 +0000 (0:00:00.867) 0:00:59.599 ***
2025-09-17 00:52:42.985109 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.985117 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.985124 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.985132 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.985140 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.985147 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.985155 | orchestrator |
2025-09-17 00:52:42.985163 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-17 00:52:42.985171 | orchestrator | Wednesday 17 September 2025 00:42:46 +0000 (0:00:01.083) 0:01:00.682 ***
2025-09-17 00:52:42.985178 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.985186 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.985194 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.985201 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.985215 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.985223 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.985231 | orchestrator |
2025-09-17 00:52:42.985243 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-17 00:52:42.985251 | orchestrator | Wednesday 17 September 2025 00:42:48 +0000 (0:00:01.326) 0:01:02.009 ***
2025-09-17 00:52:42.985259 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.985267 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.985274 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.985282 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.985290 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.985297 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.985305 | orchestrator |
2025-09-17 00:52:42.985313 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-17 00:52:42.985321 | orchestrator | Wednesday 17 September 2025 00:42:49 +0000 (0:00:01.637) 0:01:03.646 ***
2025-09-17 00:52:42.985328 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.985336 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.985344 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.985352 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.985359 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.985367 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.985375 | orchestrator |
2025-09-17 00:52:42.985382 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-17 00:52:42.985390 | orchestrator | Wednesday 17 September 2025 00:42:50 +0000 (0:00:00.820) 0:01:04.466 ***
2025-09-17 00:52:42.985398 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.985406 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.985413 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.985421 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.985429 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.985436 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.985444 | orchestrator |
2025-09-17 00:52:42.985452 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-17 00:52:42.985460 | orchestrator | Wednesday 17 September 2025 00:42:51 +0000 (0:00:00.805) 0:01:05.272 ***
2025-09-17 00:52:42.985468 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.985475 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.985483 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.985491 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.985499 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.985506 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.985514 | orchestrator |
2025-09-17 00:52:42.985522 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-17 00:52:42.985530 | orchestrator | Wednesday 17 September 2025 00:42:52 +0000 (0:00:00.760) 0:01:06.033 ***
2025-09-17 00:52:42.985537 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.985545 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.985553 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.985560 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.985568 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.985576 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.985584 | orchestrator |
2025-09-17 00:52:42.985591 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-17 00:52:42.985599 | orchestrator | Wednesday 17 September 2025 00:42:52 +0000 (0:00:00.666) 0:01:06.699 ***
2025-09-17 00:52:42.985607 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.985615 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.985622 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.985630 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.985638 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.985646 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.985653 | orchestrator |
2025-09-17 00:52:42.985661 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-17 00:52:42.985669 | orchestrator | Wednesday 17 September 2025 00:42:53 +0000 (0:00:00.833) 0:01:07.533 ***
2025-09-17 00:52:42.985681 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.985689 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.985697 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.985704 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.985712 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.985720 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.985727 | orchestrator |
2025-09-17 00:52:42.985735 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-17 00:52:42.985743 | orchestrator | Wednesday 17 September 2025 00:42:54 +0000 (0:00:00.543) 0:01:08.077 ***
2025-09-17 00:52:42.985750 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.985758 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.985766 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.985774 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.985781 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.985789 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.985797 | orchestrator |
2025-09-17 00:52:42.985808 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-17 00:52:42.985816 | orchestrator | Wednesday 17 September 2025 00:42:54 +0000 (0:00:00.617) 0:01:08.694 ***
2025-09-17 00:52:42.985824 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.985832 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.985839 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.985847 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.985855 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.985862 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.985870 | orchestrator |
2025-09-17 00:52:42.985878 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-17 00:52:42.985886 | orchestrator | Wednesday 17 September 2025 00:42:55 +0000 (0:00:00.524) 0:01:09.218 ***
2025-09-17 00:52:42.985894 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.985914 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.985922 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.985930 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.985938 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.985945 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.985953 | orchestrator |
2025-09-17 00:52:42.985961 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-17 00:52:42.985969 | orchestrator | Wednesday 17 September 2025 00:42:56 +0000 (0:00:00.762) 0:01:09.981 ***
2025-09-17 00:52:42.985976 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.985984 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.985992 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.985999 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.986007 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.986043 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.986053 | orchestrator |
2025-09-17 00:52:42.986065 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-09-17 00:52:42.986073 | orchestrator | Wednesday 17 September 2025 00:42:57 +0000 (0:00:01.177) 0:01:11.158 ***
2025-09-17 00:52:42.986081 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:42.986089 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:42.986097 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:42.986104 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:42.986112 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:42.986120 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:42.986127 | orchestrator |
2025-09-17 00:52:42.986135 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-09-17 00:52:42.986143 | orchestrator | Wednesday 17 September 2025 00:42:58 +0000 (0:00:01.532) 0:01:12.691 ***
2025-09-17 00:52:42.986151 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:42.986159 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:42.986167 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:42.986180 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:42.986187 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:42.986195 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:42.986203 | orchestrator |
2025-09-17 00:52:42.986211 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-09-17 00:52:42.986219 | orchestrator | Wednesday 17 September 2025 00:43:00 +0000 (0:00:02.173) 0:01:14.864 ***
2025-09-17 00:52:42.986227 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:42.986234 | orchestrator |
2025-09-17 00:52:42.986242 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-09-17 00:52:42.986250 | orchestrator | Wednesday 17 September 2025 00:43:01 +0000 (0:00:00.965) 0:01:15.830 ***
2025-09-17 00:52:42.986257 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.986265 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.986273 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.986281 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.986288 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.986296 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.986303 | orchestrator |
2025-09-17 00:52:42.986311 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-09-17 00:52:42.986319 | orchestrator | Wednesday 17 September 2025 00:43:02 +0000 (0:00:00.526) 0:01:16.356 ***
2025-09-17 00:52:42.986327 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.986334 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.986342 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.986349 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.986357 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.986365 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.986373 | orchestrator |
2025-09-17 00:52:42.986380 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-09-17 00:52:42.986388 | orchestrator | Wednesday 17 September 2025 00:43:03 +0000 (0:00:00.620) 0:01:16.977 ***
2025-09-17 00:52:42.986396 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-17 00:52:42.986404 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-17 00:52:42.986411 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-17 00:52:42.986419 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-17 00:52:42.986427 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-17 00:52:42.986435 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-17 00:52:42.986442 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-17 00:52:42.986450 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-17 00:52:42.986458 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-17 00:52:42.986466 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-17 00:52:42.986473 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-17 00:52:42.986494 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-17 00:52:42.986502 | orchestrator |
2025-09-17 00:52:42.986510 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-09-17 00:52:42.986518 | orchestrator | Wednesday 17 September 2025 00:43:04 +0000 (0:00:01.231) 0:01:18.209 ***
2025-09-17 00:52:42.986525 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:42.986533 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:42.986541 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:42.986548 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:42.986561 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:42.986569 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:42.986577 | orchestrator |
2025-09-17 00:52:42.986585 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-09-17 00:52:42.986593 | orchestrator | Wednesday 17 September 2025 00:43:05 +0000 (0:00:01.065) 0:01:19.275 ***
2025-09-17 00:52:42.986600 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.986608 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.986616 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.986623 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.986631 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.986639 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.986646 | orchestrator |
2025-09-17 00:52:42.986654 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-09-17 00:52:42.986662 | orchestrator | Wednesday 17 September 2025 00:43:06 +0000 (0:00:00.656) 0:01:19.931 ***
2025-09-17 00:52:42.986670 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.986678 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.986689 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.986697 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.986705 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.986736 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.986744 | orchestrator |
2025-09-17 00:52:42.986752 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-09-17 00:52:42.986760 | orchestrator | Wednesday 17 September 2025 00:43:06 +0000 (0:00:00.833) 0:01:20.765 ***
2025-09-17 00:52:42.986768 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.986775 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.986783 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.986791 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.986798 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.986806 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.986814 | orchestrator |
2025-09-17 00:52:42.986822 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-09-17 00:52:42.986830 | orchestrator | Wednesday 17 September 2025 00:43:07 +0000 (0:00:00.555) 0:01:21.320 ***
2025-09-17 00:52:42.986838 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:42.986846 | orchestrator |
2025-09-17 00:52:42.986853 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-09-17 00:52:42.986861 | orchestrator | Wednesday 17 September 2025 00:43:08 +0000 (0:00:01.154) 0:01:22.475 ***
2025-09-17 00:52:42.986869 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.986877 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.986884 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.986892 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.986900 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.986922 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.986930 | orchestrator |
2025-09-17 00:52:42.986939 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-09-17 00:52:42.986947 | orchestrator | Wednesday 17 September 2025 00:44:10 +0000 (0:01:02.285) 0:02:24.760 ***
2025-09-17 00:52:42.986955 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-17 00:52:42.986962 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-17 00:52:42.986970 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-17 00:52:42.986978 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.986986 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-17 00:52:42.986994 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-17 00:52:42.987001 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-17 00:52:42.987015 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.987023 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-17 00:52:42.987031 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-17 00:52:42.987039 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-17 00:52:42.987047 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.987054 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-17 00:52:42.987062 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-17 00:52:42.987070 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-17 00:52:42.987078 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.987086 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-17 00:52:42.987094 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-17 00:52:42.987102 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-17 00:52:42.987109 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.987117 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-17 00:52:42.987129 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-17 00:52:42.987138 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-17 00:52:42.987146 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.987154 | orchestrator |
2025-09-17 00:52:42.987161 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-09-17 00:52:42.987169 | orchestrator | Wednesday 17 September 2025 00:44:11 +0000 (0:00:00.807) 0:02:25.567 ***
2025-09-17 00:52:42.987177 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.987185 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.987192 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.987200 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.987208 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.987216 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.987223 | orchestrator |
2025-09-17 00:52:42.987231 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-09-17 00:52:42.987239 | orchestrator | Wednesday 17 September 2025 00:44:12 +0000 (0:00:00.885) 0:02:26.453 ***
2025-09-17 00:52:42.987247 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.987254 | orchestrator |
2025-09-17 00:52:42.987262 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-09-17 00:52:42.987270 | orchestrator | Wednesday 17 September 2025 00:44:12 +0000 (0:00:00.114) 0:02:26.567 ***
2025-09-17 00:52:42.987278 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.987285 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.987293 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.987301 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.987308 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.987316 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.987324 | orchestrator |
2025-09-17 00:52:42.987336 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-09-17 00:52:42.987344 | orchestrator | Wednesday 17 September 2025 00:44:13 +0000 (0:00:00.527) 0:02:27.095 ***
2025-09-17 00:52:42.987352 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.987360 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.987367 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.987375 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.987382 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.987390 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.987398 | orchestrator |
2025-09-17 00:52:42.987406 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-09-17 00:52:42.987419 | orchestrator | Wednesday 17 September 2025 00:44:13 +0000 (0:00:00.611) 0:02:27.706 ***
2025-09-17 00:52:42.987426 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.987434 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.987442 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.987449 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.987457 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.987465 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.987472 | orchestrator |
2025-09-17 00:52:42.987480 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-09-17 00:52:42.987488 | orchestrator | Wednesday 17 September 2025 00:44:14 +0000 (0:00:00.560) 0:02:28.266 ***
2025-09-17 00:52:42.987496 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.987504 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.987512 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.987519 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.987528 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.987535 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.987543 | orchestrator |
2025-09-17 00:52:42.987551 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-09-17 00:52:42.987559 | orchestrator | Wednesday 17 September 2025 00:44:16 +0000 (0:00:02.408) 0:02:30.674 ***
2025-09-17 00:52:42.987567 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.987574 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.987582 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.987590 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.987597 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.987605 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.987613 | orchestrator |
2025-09-17 00:52:42.987621 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-09-17 00:52:42.987629 | orchestrator | Wednesday 17 September 2025 00:44:17 +0000 (0:00:00.544) 0:02:31.219 ***
2025-09-17 00:52:42.987637 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:42.987646 | orchestrator |
2025-09-17 00:52:42.987654 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-09-17 00:52:42.987661 | orchestrator | Wednesday 17 September 2025 00:44:18 +0000 (0:00:01.061) 0:02:32.280 ***
2025-09-17 00:52:42.987669 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.987677 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.987685 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.987692 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.987700 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.987708 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.987716 | orchestrator |
2025-09-17 00:52:42.987724 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-09-17 00:52:42.987731 | orchestrator | Wednesday 17 September 2025 00:44:19 +0000 (0:00:00.656) 0:02:32.937 ***
2025-09-17 00:52:42.987739 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.987747 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.987755 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.987762 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.987770 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.987778 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.987785 | orchestrator |
2025-09-17 00:52:42.987793 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-09-17 00:52:42.987801 | orchestrator | Wednesday 17 September 2025 00:44:19 +0000 (0:00:00.510) 0:02:33.447 ***
2025-09-17 00:52:42.987809 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.987816 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.987824 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.987832 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.987848 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.987860 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.987868 | orchestrator |
2025-09-17 00:52:42.987876 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-09-17 00:52:42.987885 | orchestrator | Wednesday 17 September 2025 00:44:20 +0000 (0:00:00.643) 0:02:34.090 ***
2025-09-17 00:52:42.987892 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.987900 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.987949 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.987957 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.987965 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.987973 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.987980 | orchestrator |
2025-09-17 00:52:42.987988 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-09-17 00:52:42.987996 | orchestrator | Wednesday 17 September 2025 00:44:20 +0000 (0:00:00.512) 0:02:34.603 ***
2025-09-17 00:52:42.988004 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.988011 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.988019 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.988027 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.988034 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.988042 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.988050 | orchestrator |
2025-09-17 00:52:42.988058 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-09-17 00:52:42.988065 | orchestrator | Wednesday 17 September 2025 00:44:21 +0000 (0:00:00.714) 0:02:35.317 ***
2025-09-17 00:52:42.988073 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.988081 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.988088 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.988101 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.988108 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.988116 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.988124 | orchestrator |
2025-09-17 00:52:42.988132 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-09-17 00:52:42.988140 | orchestrator | Wednesday 17 September 2025 00:44:21 +0000 (0:00:00.545) 0:02:35.863 ***
2025-09-17 00:52:42.988147 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.988155 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.988163 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.988170 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.988178 | orchestrator | skipping:
[testbed-node-1] 2025-09-17 00:52:42.988186 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.988193 | orchestrator | 2025-09-17 00:52:42.988201 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-17 00:52:42.988209 | orchestrator | Wednesday 17 September 2025 00:44:22 +0000 (0:00:00.684) 0:02:36.548 *** 2025-09-17 00:52:42.988217 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.988224 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.988232 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.988240 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.988247 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.988255 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.988263 | orchestrator | 2025-09-17 00:52:42.988271 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-17 00:52:42.988279 | orchestrator | Wednesday 17 September 2025 00:44:23 +0000 (0:00:00.549) 0:02:37.097 *** 2025-09-17 00:52:42.988286 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.988294 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.988302 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.988310 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.988317 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.988325 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.988333 | orchestrator | 2025-09-17 00:52:42.988340 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-17 00:52:42.988354 | orchestrator | Wednesday 17 September 2025 00:44:24 +0000 (0:00:00.999) 0:02:38.097 *** 2025-09-17 00:52:42.988362 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 
00:52:42.988370 | orchestrator | 2025-09-17 00:52:42.988378 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-17 00:52:42.988385 | orchestrator | Wednesday 17 September 2025 00:44:25 +0000 (0:00:01.153) 0:02:39.250 *** 2025-09-17 00:52:42.988393 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-17 00:52:42.988401 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-17 00:52:42.988409 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-17 00:52:42.988417 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-17 00:52:42.988425 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-17 00:52:42.988433 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-17 00:52:42.988440 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-17 00:52:42.988448 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-17 00:52:42.988456 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-17 00:52:42.988462 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-17 00:52:42.988469 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-17 00:52:42.988476 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-17 00:52:42.988482 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-17 00:52:42.988489 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-17 00:52:42.988495 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-17 00:52:42.988502 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-17 00:52:42.988509 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-17 00:52:42.988515 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-17 00:52:42.988522 
| orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-17 00:52:42.988529 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-17 00:52:42.988539 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-17 00:52:42.988546 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-17 00:52:42.988552 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-17 00:52:42.988559 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-17 00:52:42.988565 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-17 00:52:42.988572 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-17 00:52:42.988578 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-17 00:52:42.988585 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-17 00:52:42.988591 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-17 00:52:42.988598 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-17 00:52:42.988604 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-17 00:52:42.988611 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-17 00:52:42.988618 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-17 00:52:42.988624 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-17 00:52:42.988631 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-17 00:52:42.988637 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-17 00:52:42.988644 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-17 00:52:42.988654 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-17 00:52:42.988665 | orchestrator | changed: 
[testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-17 00:52:42.988672 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-17 00:52:42.988678 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-17 00:52:42.988685 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-17 00:52:42.988692 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-17 00:52:42.988698 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-17 00:52:42.988705 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-17 00:52:42.988711 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-17 00:52:42.988718 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-17 00:52:42.988724 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-17 00:52:42.988731 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-17 00:52:42.988738 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-17 00:52:42.988744 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-17 00:52:42.988751 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-17 00:52:42.988757 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-17 00:52:42.988764 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-17 00:52:42.988770 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-17 00:52:42.988777 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-17 00:52:42.988783 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-17 00:52:42.988790 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rgw) 2025-09-17 00:52:42.988796 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-17 00:52:42.988803 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-17 00:52:42.988809 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-17 00:52:42.988816 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-17 00:52:42.988822 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-17 00:52:42.988829 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-17 00:52:42.988835 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-17 00:52:42.988842 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-17 00:52:42.988848 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-17 00:52:42.988855 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-17 00:52:42.988862 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-17 00:52:42.988868 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-17 00:52:42.988875 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-17 00:52:42.988881 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-17 00:52:42.988888 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-17 00:52:42.988894 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-17 00:52:42.988901 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-17 00:52:42.988936 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-17 
00:52:42.988942 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-17 00:52:42.988949 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-17 00:52:42.988967 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-17 00:52:42.988974 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-17 00:52:42.988981 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-17 00:52:42.988987 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-17 00:52:42.988994 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-17 00:52:42.989001 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-17 00:52:42.989007 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-17 00:52:42.989014 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-17 00:52:42.989020 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-17 00:52:42.989027 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-17 00:52:42.989034 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-17 00:52:42.989040 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-17 00:52:42.989047 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-17 00:52:42.989054 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-17 00:52:42.989060 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-17 00:52:42.989067 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-17 00:52:42.989077 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-17 00:52:42.989084 | orchestrator | changed: [testbed-node-2] => 
(item=/var/log/ceph) 2025-09-17 00:52:42.989090 | orchestrator | 2025-09-17 00:52:42.989097 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-17 00:52:42.989104 | orchestrator | Wednesday 17 September 2025 00:44:31 +0000 (0:00:06.442) 0:02:45.693 *** 2025-09-17 00:52:42.989110 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989117 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989124 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989130 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:52:42.989137 | orchestrator | 2025-09-17 00:52:42.989144 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-17 00:52:42.989150 | orchestrator | Wednesday 17 September 2025 00:44:33 +0000 (0:00:01.320) 0:02:47.013 *** 2025-09-17 00:52:42.989157 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:42.989165 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:42.989171 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:42.989178 | orchestrator | 2025-09-17 00:52:42.989185 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-17 00:52:42.989191 | orchestrator | Wednesday 17 September 2025 00:44:33 +0000 (0:00:00.674) 0:02:47.687 *** 2025-09-17 00:52:42.989198 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:42.989205 | orchestrator | 
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:42.989211 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:42.989218 | orchestrator | 2025-09-17 00:52:42.989225 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-17 00:52:42.989232 | orchestrator | Wednesday 17 September 2025 00:44:35 +0000 (0:00:01.655) 0:02:49.343 *** 2025-09-17 00:52:42.989242 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.989249 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.989256 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989263 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.989269 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989276 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989282 | orchestrator | 2025-09-17 00:52:42.989289 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-17 00:52:42.989296 | orchestrator | Wednesday 17 September 2025 00:44:36 +0000 (0:00:00.827) 0:02:50.170 *** 2025-09-17 00:52:42.989303 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.989309 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.989316 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.989323 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989329 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989336 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989342 | orchestrator | 2025-09-17 00:52:42.989349 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-17 00:52:42.989356 | orchestrator | Wednesday 17 September 2025 00:44:37 +0000 (0:00:00.991) 0:02:51.161 *** 2025-09-17 
00:52:42.989362 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.989369 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.989376 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.989382 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989389 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989396 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989402 | orchestrator | 2025-09-17 00:52:42.989409 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-17 00:52:42.989416 | orchestrator | Wednesday 17 September 2025 00:44:37 +0000 (0:00:00.529) 0:02:51.691 *** 2025-09-17 00:52:42.989427 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.989434 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.989440 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.989447 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989453 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989460 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989466 | orchestrator | 2025-09-17 00:52:42.989473 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-17 00:52:42.989480 | orchestrator | Wednesday 17 September 2025 00:44:38 +0000 (0:00:00.661) 0:02:52.352 *** 2025-09-17 00:52:42.989486 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.989493 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.989500 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.989506 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989513 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989520 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989526 | orchestrator | 2025-09-17 00:52:42.989533 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many 
osds are to be created] *** 2025-09-17 00:52:42.989540 | orchestrator | Wednesday 17 September 2025 00:44:38 +0000 (0:00:00.497) 0:02:52.850 *** 2025-09-17 00:52:42.989547 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.989553 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.989560 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.989566 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989573 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989579 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989586 | orchestrator | 2025-09-17 00:52:42.989596 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-17 00:52:42.989603 | orchestrator | Wednesday 17 September 2025 00:44:39 +0000 (0:00:00.595) 0:02:53.446 *** 2025-09-17 00:52:42.989610 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.989617 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.989628 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.989634 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989641 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989648 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989654 | orchestrator | 2025-09-17 00:52:42.989661 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-17 00:52:42.989668 | orchestrator | Wednesday 17 September 2025 00:44:40 +0000 (0:00:00.536) 0:02:53.982 *** 2025-09-17 00:52:42.989674 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.989681 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.989687 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.989694 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989700 | orchestrator | skipping: [testbed-node-1] 2025-09-17 
00:52:42.989707 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989714 | orchestrator | 2025-09-17 00:52:42.989720 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-17 00:52:42.989727 | orchestrator | Wednesday 17 September 2025 00:44:41 +0000 (0:00:00.930) 0:02:54.913 *** 2025-09-17 00:52:42.989734 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989740 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989747 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989753 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.989760 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.989766 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.989773 | orchestrator | 2025-09-17 00:52:42.989780 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-17 00:52:42.989786 | orchestrator | Wednesday 17 September 2025 00:44:43 +0000 (0:00:02.859) 0:02:57.772 *** 2025-09-17 00:52:42.989793 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.989800 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.989806 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.989813 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989819 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989826 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989833 | orchestrator | 2025-09-17 00:52:42.989839 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-17 00:52:42.989846 | orchestrator | Wednesday 17 September 2025 00:44:44 +0000 (0:00:00.648) 0:02:58.421 *** 2025-09-17 00:52:42.989852 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.989859 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.989866 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.989873 | orchestrator | 
skipping: [testbed-node-0] 2025-09-17 00:52:42.989879 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989886 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989892 | orchestrator | 2025-09-17 00:52:42.989899 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-17 00:52:42.989921 | orchestrator | Wednesday 17 September 2025 00:44:45 +0000 (0:00:00.514) 0:02:58.936 *** 2025-09-17 00:52:42.989928 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.989934 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.989941 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.989948 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.989954 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.989961 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.989967 | orchestrator | 2025-09-17 00:52:42.989974 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-17 00:52:42.989981 | orchestrator | Wednesday 17 September 2025 00:44:45 +0000 (0:00:00.604) 0:02:59.540 *** 2025-09-17 00:52:42.989987 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:42.989994 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:42.990005 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:42.990012 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.990045 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.990052 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.990059 | orchestrator | 2025-09-17 00:52:42.990070 | orchestrator | TASK [ceph-config : Set 
config to cluster] ************************************* 2025-09-17 00:52:42.990077 | orchestrator | Wednesday 17 September 2025 00:44:46 +0000 (0:00:00.966) 0:03:00.507 *** 2025-09-17 00:52:42.990084 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-17 00:52:42.990094 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-17 00:52:42.990102 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-17 00:52:42.990112 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-17 00:52:42.990119 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.990126 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  
2025-09-17 00:52:42.990134 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-09-17 00:52:42.990141 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.990147 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.990154 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.990160 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.990167 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.990174 | orchestrator |
2025-09-17 00:52:42.990181 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-09-17 00:52:42.990187 | orchestrator | Wednesday 17 September 2025 00:44:47 +0000 (0:00:01.072) 0:03:01.579 ***
2025-09-17 00:52:42.990194 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.990201 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.990207 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.990214 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.990221 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.990227 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.990234 | orchestrator |
2025-09-17 00:52:42.990240 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-09-17 00:52:42.990247 | orchestrator | Wednesday 17 September 2025 00:44:48 +0000 (0:00:00.888) 0:03:02.467 ***
2025-09-17 00:52:42.990254 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.990265 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.990272 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.990278 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.990285 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.990292 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.990298 | orchestrator |
2025-09-17 00:52:42.990305 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-17 00:52:42.990312 | orchestrator | Wednesday 17 September 2025 00:44:49 +0000 (0:00:00.420) 0:03:02.888 ***
2025-09-17 00:52:42.990318 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.990325 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.990331 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.990338 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.990344 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.990351 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.990358 | orchestrator |
2025-09-17 00:52:42.990364 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-17 00:52:42.990371 | orchestrator | Wednesday 17 September 2025 00:44:49 +0000 (0:00:00.657) 0:03:03.545 ***
2025-09-17 00:52:42.990377 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.990384 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.990390 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.990397 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.990404 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.990410 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.990417 | orchestrator |
2025-09-17 00:52:42.990423 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-17 00:52:42.990430 | orchestrator | Wednesday 17 September 2025 00:44:50 +0000 (0:00:00.507) 0:03:04.053 ***
2025-09-17 00:52:42.990437 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.990453 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.990460 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.990467 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.990473 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.990480 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.990486 | orchestrator |
2025-09-17 00:52:42.990493 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-17 00:52:42.990500 | orchestrator | Wednesday 17 September 2025 00:44:50 +0000 (0:00:00.706) 0:03:04.760 ***
2025-09-17 00:52:42.990506 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.990513 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.990520 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.990526 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.990533 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.990539 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.990546 | orchestrator |
2025-09-17 00:52:42.990552 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-17 00:52:42.990559 | orchestrator | Wednesday 17 September 2025 00:44:51 +0000 (0:00:00.931) 0:03:05.692 ***
2025-09-17 00:52:42.990566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.990572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:52:42.990579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:52:42.990585 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.990592 | orchestrator |
2025-09-17 00:52:42.990598 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-17 00:52:42.990605 | orchestrator | Wednesday 17 September 2025 00:44:52 +0000 (0:00:00.539) 0:03:06.231 ***
2025-09-17 00:52:42.990615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.990622 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:52:42.990628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:52:42.990640 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.990647 | orchestrator |
2025-09-17 00:52:42.990654 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-17 00:52:42.990660 | orchestrator | Wednesday 17 September 2025 00:44:53 +0000 (0:00:00.717) 0:03:06.948 ***
2025-09-17 00:52:42.990667 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.990673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:52:42.990680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:52:42.990686 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.990693 | orchestrator |
2025-09-17 00:52:42.990700 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-17 00:52:42.990706 | orchestrator | Wednesday 17 September 2025 00:44:53 +0000 (0:00:00.361) 0:03:07.310 ***
2025-09-17 00:52:42.990713 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.990719 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.990726 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.990732 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.990739 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.990745 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.990752 | orchestrator |
2025-09-17 00:52:42.990759 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-17 00:52:42.990765 | orchestrator | Wednesday 17 September 2025 00:44:54 +0000 (0:00:00.602) 0:03:07.913 ***
2025-09-17 00:52:42.990772 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-17 00:52:42.990778 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-17 00:52:42.990785 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-09-17 00:52:42.990792 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-17 00:52:42.990798 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.990805 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-09-17 00:52:42.990811 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.990818 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-09-17 00:52:42.990824 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.990831 | orchestrator |
2025-09-17 00:52:42.990837 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-09-17 00:52:42.990844 | orchestrator | Wednesday 17 September 2025 00:44:56 +0000 (0:00:01.967) 0:03:09.880 ***
2025-09-17 00:52:42.990851 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:42.990857 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:42.990864 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:42.990870 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:42.990877 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:42.990883 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:42.990890 | orchestrator |
2025-09-17 00:52:42.990896 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-17 00:52:42.990918 | orchestrator | Wednesday 17 September 2025 00:44:58 +0000 (0:00:02.800) 0:03:12.681 ***
2025-09-17 00:52:42.990925 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:42.990932 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:42.990938 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:42.990945 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:42.990951 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:42.990958 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:42.990964 | orchestrator |
2025-09-17 00:52:42.990971 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-17 00:52:42.990977 | orchestrator | Wednesday 17 September 2025 00:45:00 +0000 (0:00:01.535) 0:03:14.216 ***
2025-09-17 00:52:42.990984 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.990990 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.990997 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.991004 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:42.991018 | orchestrator |
2025-09-17 00:52:42.991025 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-17 00:52:42.991031 | orchestrator | Wednesday 17 September 2025 00:45:01 +0000 (0:00:01.044) 0:03:15.260 ***
2025-09-17 00:52:42.991038 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.991045 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.991051 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.991058 | orchestrator |
2025-09-17 00:52:42.991069 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-17 00:52:42.991076 | orchestrator | Wednesday 17 September 2025 00:45:01 +0000 (0:00:00.295) 0:03:15.555 ***
2025-09-17 00:52:42.991082 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:42.991089 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:42.991095 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:42.991102 | orchestrator |
2025-09-17 00:52:42.991109 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-17 00:52:42.991115 | orchestrator | Wednesday 17 September 2025 00:45:03 +0000 (0:00:01.316) 0:03:16.872 ***
2025-09-17 00:52:42.991122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 00:52:42.991129 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-17 00:52:42.991135 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-17 00:52:42.991142 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.991148 | orchestrator |
2025-09-17 00:52:42.991155 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-17 00:52:42.991161 | orchestrator | Wednesday 17 September 2025 00:45:03 +0000 (0:00:00.597) 0:03:17.469 ***
2025-09-17 00:52:42.991168 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.991175 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.991181 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.991188 | orchestrator |
2025-09-17 00:52:42.991194 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-17 00:52:42.991201 | orchestrator | Wednesday 17 September 2025 00:45:04 +0000 (0:00:00.407) 0:03:17.877 ***
2025-09-17 00:52:42.991211 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.991218 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.991224 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.991231 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-4, testbed-node-3, testbed-node-5
2025-09-17 00:52:42.991238 | orchestrator |
2025-09-17 00:52:42.991244 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-17 00:52:42.991251 | orchestrator | Wednesday 17 September 2025 00:45:04 +0000 (0:00:00.827) 0:03:18.705 ***
2025-09-17 00:52:42.991257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.991264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:52:42.991271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:52:42.991277 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991284 | orchestrator |
2025-09-17 00:52:42.991290 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-17 00:52:42.991297 | orchestrator | Wednesday 17 September 2025 00:45:05 +0000 (0:00:00.444) 0:03:19.149 ***
2025-09-17 00:52:42.991304 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991310 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.991317 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.991323 | orchestrator |
2025-09-17 00:52:42.991330 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-17 00:52:42.991337 | orchestrator | Wednesday 17 September 2025 00:45:05 +0000 (0:00:00.539) 0:03:19.689 ***
2025-09-17 00:52:42.991343 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991350 | orchestrator |
2025-09-17 00:52:42.991356 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-17 00:52:42.991363 | orchestrator | Wednesday 17 September 2025 00:45:06 +0000 (0:00:00.235) 0:03:19.925 ***
2025-09-17 00:52:42.991374 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991381 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.991387 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.991394 | orchestrator |
2025-09-17 00:52:42.991400 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-17 00:52:42.991407 | orchestrator | Wednesday 17 September 2025 00:45:06 +0000 (0:00:00.293) 0:03:20.218 ***
2025-09-17 00:52:42.991414 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991420 | orchestrator |
2025-09-17 00:52:42.991427 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-17 00:52:42.991433 | orchestrator | Wednesday 17 September 2025 00:45:06 +0000 (0:00:00.202) 0:03:20.421 ***
2025-09-17 00:52:42.991440 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991447 | orchestrator |
2025-09-17 00:52:42.991453 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-17 00:52:42.991460 | orchestrator | Wednesday 17 September 2025 00:45:06 +0000 (0:00:00.196) 0:03:20.617 ***
2025-09-17 00:52:42.991467 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991473 | orchestrator |
2025-09-17 00:52:42.991480 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-17 00:52:42.991486 | orchestrator | Wednesday 17 September 2025 00:45:06 +0000 (0:00:00.142) 0:03:20.760 ***
2025-09-17 00:52:42.991493 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991499 | orchestrator |
2025-09-17 00:52:42.991506 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-17 00:52:42.991512 | orchestrator | Wednesday 17 September 2025 00:45:07 +0000 (0:00:00.168) 0:03:20.929 ***
2025-09-17 00:52:42.991519 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991526 | orchestrator |
2025-09-17 00:52:42.991532 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-17 00:52:42.991539 | orchestrator | Wednesday 17 September 2025 00:45:07 +0000 (0:00:00.274) 0:03:21.204 ***
2025-09-17 00:52:42.991545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:52:42.991552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.991558 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:52:42.991565 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991571 | orchestrator |
2025-09-17 00:52:42.991578 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-17 00:52:42.991585 | orchestrator | Wednesday 17 September 2025 00:45:07 +0000 (0:00:00.522) 0:03:21.727 ***
2025-09-17 00:52:42.991591 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991602 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.991609 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.991615 | orchestrator |
2025-09-17 00:52:42.991622 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-17 00:52:42.991628 | orchestrator | Wednesday 17 September 2025 00:45:08 +0000 (0:00:00.455) 0:03:22.182 ***
2025-09-17 00:52:42.991635 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991642 | orchestrator |
2025-09-17 00:52:42.991648 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-17 00:52:42.991655 | orchestrator | Wednesday 17 September 2025 00:45:08 +0000 (0:00:00.179) 0:03:22.362 ***
2025-09-17 00:52:42.991662 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991668 | orchestrator |
2025-09-17 00:52:42.991675 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-17 00:52:42.991682 | orchestrator | Wednesday 17 September 2025 00:45:08 +0000 (0:00:00.188) 0:03:22.550 ***
2025-09-17 00:52:42.991688 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.991695 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.991701 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.991708 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:52:42.991719 | orchestrator |
2025-09-17 00:52:42.991726 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-17 00:52:42.991732 | orchestrator | Wednesday 17 September 2025 00:45:09 +0000 (0:00:00.954) 0:03:23.505 ***
2025-09-17 00:52:42.991739 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.991745 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.991755 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.991762 | orchestrator |
2025-09-17 00:52:42.991769 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-17 00:52:42.991775 | orchestrator | Wednesday 17 September 2025 00:45:10 +0000 (0:00:00.395) 0:03:23.900 ***
2025-09-17 00:52:42.991782 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:42.991788 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:42.991795 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:42.991802 | orchestrator |
2025-09-17 00:52:42.991808 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-17 00:52:42.991815 | orchestrator | Wednesday 17 September 2025 00:45:11 +0000 (0:00:01.252) 0:03:25.153 ***
2025-09-17 00:52:42.991821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.991828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:52:42.991834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:52:42.991841 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.991847 | orchestrator |
2025-09-17 00:52:42.991854 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-17 00:52:42.991861 | orchestrator | Wednesday 17 September 2025 00:45:11 +0000 (0:00:00.638) 0:03:25.792 ***
2025-09-17 00:52:42.991867 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.991874 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.991880 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.991887 | orchestrator |
2025-09-17 00:52:42.991894 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-17 00:52:42.991900 | orchestrator | Wednesday 17 September 2025 00:45:12 +0000 (0:00:00.297) 0:03:26.090 ***
2025-09-17 00:52:42.991920 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.991927 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.991934 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.991940 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:52:42.991947 | orchestrator |
2025-09-17 00:52:42.991954 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-17 00:52:42.991960 | orchestrator | Wednesday 17 September 2025 00:45:13 +0000 (0:00:00.890) 0:03:26.980 ***
2025-09-17 00:52:42.991967 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.991973 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.991980 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.991987 | orchestrator |
2025-09-17 00:52:42.991993 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-17 00:52:42.992000 | orchestrator | Wednesday 17 September 2025 00:45:13 +0000 (0:00:00.312) 0:03:27.293 ***
2025-09-17 00:52:42.992006 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:42.992013 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:42.992020 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:42.992026 | orchestrator |
2025-09-17 00:52:42.992033 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-17 00:52:42.992040 | orchestrator | Wednesday 17 September 2025 00:45:15 +0000 (0:00:01.601) 0:03:28.894 ***
2025-09-17 00:52:42.992046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:42.992053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:52:42.992059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:52:42.992066 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.992072 | orchestrator |
2025-09-17 00:52:42.992079 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-17 00:52:42.992090 | orchestrator | Wednesday 17 September 2025 00:45:15 +0000 (0:00:00.635) 0:03:29.530 ***
2025-09-17 00:52:42.992097 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.992103 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.992110 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.992117 | orchestrator |
2025-09-17 00:52:42.992123 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-09-17 00:52:42.992130 | orchestrator | Wednesday 17 September 2025 00:45:16 +0000 (0:00:00.382) 0:03:29.913 ***
2025-09-17 00:52:42.992136 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.992143 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.992149 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.992156 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.992162 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.992169 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.992175 | orchestrator |
2025-09-17 00:52:42.992182 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-17 00:52:42.992193 | orchestrator | Wednesday 17 September 2025 00:45:16 +0000 (0:00:00.751) 0:03:30.664 ***
2025-09-17 00:52:42.992200 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.992207 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.992213 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.992220 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:42.992226 | orchestrator |
2025-09-17 00:52:42.992233 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-17 00:52:42.992240 | orchestrator | Wednesday 17 September 2025 00:45:18 +0000 (0:00:01.469) 0:03:32.133 ***
2025-09-17 00:52:42.992246 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.992253 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.992259 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.992266 | orchestrator |
2025-09-17 00:52:42.992273 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-17 00:52:42.992279 | orchestrator | Wednesday 17 September 2025 00:45:18 +0000 (0:00:00.350) 0:03:32.484 ***
2025-09-17 00:52:42.992286 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:42.992292 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:42.992299 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:42.992306 | orchestrator |
2025-09-17 00:52:42.992312 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-17 00:52:42.992319 | orchestrator | Wednesday 17 September 2025 00:45:20 +0000 (0:00:01.694) 0:03:34.178 ***
2025-09-17 00:52:42.992325 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 00:52:42.992335 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-17 00:52:42.992342 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-17 00:52:42.992349 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.992355 | orchestrator |
2025-09-17 00:52:42.992362 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-17 00:52:42.992368 | orchestrator | Wednesday 17 September 2025 00:45:20 +0000 (0:00:00.528) 0:03:34.707 ***
2025-09-17 00:52:42.992375 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.992382 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.992388 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.992395 | orchestrator |
2025-09-17 00:52:42.992401 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-09-17 00:52:42.992408 | orchestrator |
2025-09-17 00:52:42.992414 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-17 00:52:42.992421 | orchestrator | Wednesday 17 September 2025 00:45:21 +0000 (0:00:00.581) 0:03:35.288 ***
2025-09-17 00:52:42.992428 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:42.992434 | orchestrator |
2025-09-17 00:52:42.992445 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-17 00:52:42.992452 | orchestrator | Wednesday 17 September 2025 00:45:22 +0000 (0:00:00.972) 0:03:36.261 ***
2025-09-17 00:52:42.992459 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:42.992465 | orchestrator |
2025-09-17 00:52:42.992472 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-17 00:52:42.992479 | orchestrator | Wednesday 17 September 2025 00:45:22 +0000 (0:00:00.469) 0:03:36.731 ***
2025-09-17 00:52:42.992485 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.992492 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.992498 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.992505 | orchestrator |
2025-09-17 00:52:42.992511 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-17 00:52:42.992518 | orchestrator | Wednesday 17 September 2025 00:45:23 +0000 (0:00:00.840) 0:03:37.572 ***
2025-09-17 00:52:42.992525 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.992531 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.992538 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.992544 | orchestrator |
2025-09-17 00:52:42.992551 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-17 00:52:42.992558 | orchestrator | Wednesday 17 September 2025 00:45:24 +0000 (0:00:00.488) 0:03:38.060 ***
2025-09-17 00:52:42.992564 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.992571 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.992577 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.992584 | orchestrator |
2025-09-17 00:52:42.992590 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-17 00:52:42.992597 | orchestrator | Wednesday 17 September 2025 00:45:24 +0000 (0:00:00.347) 0:03:38.408 ***
2025-09-17 00:52:42.992604 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.992610 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.992617 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.992623 | orchestrator |
2025-09-17 00:52:42.992630 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-17 00:52:42.992637 | orchestrator | Wednesday 17 September 2025 00:45:24 +0000 (0:00:00.338) 0:03:38.747 ***
2025-09-17 00:52:42.992643 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.992650 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.992656 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.992663 | orchestrator |
2025-09-17 00:52:42.992670 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-17 00:52:42.992676 | orchestrator | Wednesday 17 September 2025 00:45:25 +0000 (0:00:00.992) 0:03:39.740 ***
2025-09-17 00:52:42.992683 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.992689 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.992696 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.992702 | orchestrator |
2025-09-17 00:52:42.992709 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-17 00:52:42.992715 | orchestrator | Wednesday 17 September 2025 00:45:26 +0000 (0:00:00.421) 0:03:40.161 ***
2025-09-17 00:52:42.992722 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.992729 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.992735 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.992742 | orchestrator |
2025-09-17 00:52:42.992752 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-17 00:52:42.992759 | orchestrator | Wednesday 17 September 2025 00:45:26 +0000 (0:00:00.358) 0:03:40.519 ***
2025-09-17 00:52:42.992766 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.992772 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.992779 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.992785 | orchestrator |
2025-09-17 00:52:42.992792 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-17 00:52:42.992799 | orchestrator | Wednesday 17 September 2025 00:45:27 +0000 (0:00:00.723) 0:03:41.243 ***
2025-09-17 00:52:42.992810 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.992817 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.992824 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.992830 | orchestrator |
2025-09-17 00:52:42.992837 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-17 00:52:42.992843 | orchestrator | Wednesday 17 September 2025 00:45:28 +0000 (0:00:00.408) 0:03:41.946 ***
2025-09-17 00:52:42.992850 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.992857 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.992863 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.992870 | orchestrator |
2025-09-17 00:52:42.992876 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-17 00:52:42.992883 | orchestrator | Wednesday 17 September 2025 00:45:28 +0000 (0:00:00.281) 0:03:42.355 ***
2025-09-17 00:52:42.992890 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.992896 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.992935 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.992943 | orchestrator |
2025-09-17 00:52:42.992954 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-17 00:52:42.992960 | orchestrator | Wednesday 17 September 2025 00:45:28 +0000 (0:00:00.281) 0:03:42.636 ***
2025-09-17 00:52:42.992967 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.992974 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.992980 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.992987 | orchestrator |
2025-09-17 00:52:42.992993 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-17 00:52:42.993000 | orchestrator | Wednesday 17 September 2025 00:45:29 +0000 (0:00:00.352) 0:03:42.989 ***
2025-09-17 00:52:42.993006 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.993013 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.993020 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.993026 | orchestrator |
2025-09-17 00:52:42.993033 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-17 00:52:42.993039 | orchestrator | Wednesday 17 September 2025 00:45:29 +0000 (0:00:00.274) 0:03:43.263 ***
2025-09-17 00:52:42.993046 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.993053 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.993059 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.993066 | orchestrator |
2025-09-17 00:52:42.993072 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-17 00:52:42.993079 | orchestrator | Wednesday 17 September 2025 00:45:29 +0000 (0:00:00.287) 0:03:43.550 ***
2025-09-17 00:52:42.993086 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.993092 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.993099 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.993105 | orchestrator |
2025-09-17 00:52:42.993112 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-17 00:52:42.993118 | orchestrator | Wednesday 17 September 2025 00:45:30 +0000 (0:00:00.439) 0:03:43.989 ***
2025-09-17 00:52:42.993125 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.993132 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.993138 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.993145 | orchestrator |
2025-09-17 00:52:42.993151 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-17 00:52:42.993158 | orchestrator | Wednesday 17 September 2025 00:45:30 +0000 (0:00:00.373) 0:03:44.363 ***
2025-09-17 00:52:42.993164 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.993171 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.993178 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.993184 | orchestrator |
2025-09-17 00:52:42.993191 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-17 00:52:42.993197 | orchestrator | Wednesday 17 September 2025 00:45:30 +0000 (0:00:00.290) 0:03:44.653 ***
2025-09-17 00:52:42.993209 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.993216 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.993222 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.993228 | orchestrator |
2025-09-17 00:52:42.993234 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-17 00:52:42.993240 | orchestrator | Wednesday 17 September 2025 00:45:31 +0000 (0:00:00.278) 0:03:44.932 ***
2025-09-17 00:52:42.993246 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.993252 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.993258 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.993264 | orchestrator |
2025-09-17 00:52:42.993270 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-09-17 00:52:42.993276 | orchestrator | Wednesday 17 September 2025 00:45:31 +0000 (0:00:00.705) 0:03:45.637 ***
2025-09-17 00:52:42.993283 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.993289 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.993294 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.993301 | orchestrator |
2025-09-17 00:52:42.993307 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-09-17 00:52:42.993313 | orchestrator | Wednesday 17 September 2025 00:45:32 +0000 (0:00:00.516) 0:03:46.153 ***
2025-09-17 00:52:42.993319 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:42.993325 | orchestrator |
2025-09-17 00:52:42.993331 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-09-17 00:52:42.993337 | orchestrator | Wednesday 17 September 2025 00:45:32 +0000 (0:00:00.667) 0:03:46.821 ***
2025-09-17 00:52:42.993343 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.993350 | orchestrator |
2025-09-17 00:52:42.993356 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-09-17 00:52:42.993366 | orchestrator | Wednesday 17 September 2025 00:45:33 +0000 (0:00:00.112) 0:03:46.934 ***
2025-09-17 00:52:42.993372 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-09-17 00:52:42.993379 | orchestrator |
2025-09-17 00:52:42.993385 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-09-17 00:52:42.993391 | orchestrator | Wednesday 17 September 2025 00:45:34 +0000 (0:00:00.940) 0:03:47.874 ***
2025-09-17 00:52:42.993397 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.993403 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.993409 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.993415 | orchestrator |
2025-09-17 00:52:42.993422 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-09-17 00:52:42.993428 | orchestrator | Wednesday 17 September 2025 00:45:34 +0000 (0:00:00.523) 0:03:48.398 ***
2025-09-17 00:52:42.993434 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.993440 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.993446 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.993452 | orchestrator |
2025-09-17 00:52:42.993458 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-09-17 00:52:42.993464 | orchestrator | Wednesday 17 September 2025 00:45:34 +0000 (0:00:00.366) 0:03:48.765 ***
2025-09-17 00:52:42.993471 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:42.993477 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:42.993483 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:42.993489 | orchestrator |
2025-09-17 00:52:42.993495 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-09-17 00:52:42.993501 | orchestrator | Wednesday 17 September 2025 00:45:36 +0000 (0:00:01.228) 0:03:49.993 ***
2025-09-17 00:52:42.993507 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:52:42.993513 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:52:42.993523 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:52:42.993529 | orchestrator | 2025-09-17 00:52:42.993535 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-17 00:52:42.993541 | orchestrator | Wednesday 17 September 2025 00:45:37 +0000 (0:00:01.067) 0:03:51.060 *** 2025-09-17 00:52:42.993551 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:52:42.993557 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:52:42.993563 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:52:42.993570 | orchestrator | 2025-09-17 00:52:42.993576 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-17 00:52:42.993582 | orchestrator | Wednesday 17 September 2025 00:45:37 +0000 (0:00:00.734) 0:03:51.795 *** 2025-09-17 00:52:42.993588 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.993594 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.993600 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.993606 | orchestrator | 2025-09-17 00:52:42.993612 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-17 00:52:42.993618 | orchestrator | Wednesday 17 September 2025 00:45:38 +0000 (0:00:00.681) 0:03:52.476 *** 2025-09-17 00:52:42.993624 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:52:42.993631 | orchestrator | 2025-09-17 00:52:42.993637 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-17 00:52:42.993643 | orchestrator | Wednesday 17 September 2025 00:45:39 +0000 (0:00:01.227) 0:03:53.703 *** 2025-09-17 00:52:42.993649 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.993655 | orchestrator | 2025-09-17 00:52:42.993661 | orchestrator | TASK [ceph-mon : 
Copy admin keyring over to mons] ****************************** 2025-09-17 00:52:42.993667 | orchestrator | Wednesday 17 September 2025 00:45:40 +0000 (0:00:00.663) 0:03:54.367 *** 2025-09-17 00:52:42.993673 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-17 00:52:42.993679 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:52:42.993685 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:52:42.993691 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 00:52:42.993697 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-09-17 00:52:42.993703 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 00:52:42.993709 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 00:52:42.993716 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 00:52:42.993722 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-17 00:52:42.993728 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2025-09-17 00:52:42.993734 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-17 00:52:42.993740 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-17 00:52:42.993746 | orchestrator | 2025-09-17 00:52:42.993752 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-17 00:52:42.993758 | orchestrator | Wednesday 17 September 2025 00:45:43 +0000 (0:00:03.392) 0:03:57.760 *** 2025-09-17 00:52:42.993765 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:52:42.993771 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:52:42.993777 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:52:42.993783 | orchestrator | 2025-09-17 00:52:42.993789 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2025-09-17 00:52:42.993795 | orchestrator | Wednesday 17 September 2025 00:45:45 +0000 (0:00:01.173) 0:03:58.934 *** 2025-09-17 00:52:42.993801 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.993807 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.993813 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.993819 | orchestrator | 2025-09-17 00:52:42.993825 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-17 00:52:42.993832 | orchestrator | Wednesday 17 September 2025 00:45:45 +0000 (0:00:00.334) 0:03:59.268 *** 2025-09-17 00:52:42.993838 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.993844 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.993850 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.993856 | orchestrator | 2025-09-17 00:52:42.993862 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-17 00:52:42.993872 | orchestrator | Wednesday 17 September 2025 00:45:45 +0000 (0:00:00.294) 0:03:59.563 *** 2025-09-17 00:52:42.993879 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:52:42.993885 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:52:42.993891 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:52:42.993897 | orchestrator | 2025-09-17 00:52:42.993923 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-17 00:52:42.993930 | orchestrator | Wednesday 17 September 2025 00:45:47 +0000 (0:00:01.798) 0:04:01.361 *** 2025-09-17 00:52:42.993936 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:52:42.993943 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:52:42.993949 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:52:42.993955 | orchestrator | 2025-09-17 00:52:42.993961 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-17 
00:52:42.993967 | orchestrator | Wednesday 17 September 2025 00:45:48 +0000 (0:00:01.212) 0:04:02.573 *** 2025-09-17 00:52:42.993973 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.993979 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.993985 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.993991 | orchestrator | 2025-09-17 00:52:42.993997 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-17 00:52:42.994004 | orchestrator | Wednesday 17 September 2025 00:45:48 +0000 (0:00:00.287) 0:04:02.861 *** 2025-09-17 00:52:42.994010 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:52:42.994083 | orchestrator | 2025-09-17 00:52:42.994093 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-17 00:52:42.994099 | orchestrator | Wednesday 17 September 2025 00:45:49 +0000 (0:00:00.466) 0:04:03.327 *** 2025-09-17 00:52:42.994105 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.994111 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.994117 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.994123 | orchestrator | 2025-09-17 00:52:42.994132 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-17 00:52:42.994139 | orchestrator | Wednesday 17 September 2025 00:45:49 +0000 (0:00:00.404) 0:04:03.732 *** 2025-09-17 00:52:42.994145 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.994151 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.994157 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.994163 | orchestrator | 2025-09-17 00:52:42.994169 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-17 00:52:42.994175 | orchestrator | Wednesday 17 September 2025 
00:45:50 +0000 (0:00:00.253) 0:04:03.985 *** 2025-09-17 00:52:42.994181 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:52:42.994187 | orchestrator | 2025-09-17 00:52:42.994193 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-17 00:52:42.994199 | orchestrator | Wednesday 17 September 2025 00:45:50 +0000 (0:00:00.472) 0:04:04.457 *** 2025-09-17 00:52:42.994205 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:52:42.994212 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:52:42.994217 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:52:42.994224 | orchestrator | 2025-09-17 00:52:42.994230 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-17 00:52:42.994236 | orchestrator | Wednesday 17 September 2025 00:45:53 +0000 (0:00:02.413) 0:04:06.871 *** 2025-09-17 00:52:42.994242 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:52:42.994248 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:52:42.994254 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:52:42.994260 | orchestrator | 2025-09-17 00:52:42.994266 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-17 00:52:42.994272 | orchestrator | Wednesday 17 September 2025 00:45:54 +0000 (0:00:01.180) 0:04:08.051 *** 2025-09-17 00:52:42.994278 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:52:42.994291 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:52:42.994297 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:52:42.994303 | orchestrator | 2025-09-17 00:52:42.994310 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-17 00:52:42.994316 | orchestrator | Wednesday 17 September 2025 00:45:55 +0000 (0:00:01.730) 0:04:09.782 *** 2025-09-17 00:52:42.994322 | 
orchestrator | changed: [testbed-node-0] 2025-09-17 00:52:42.994328 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:52:42.994334 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:52:42.994340 | orchestrator | 2025-09-17 00:52:42.994346 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-17 00:52:42.994352 | orchestrator | Wednesday 17 September 2025 00:45:57 +0000 (0:00:01.861) 0:04:11.643 *** 2025-09-17 00:52:42.994358 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:52:42.994365 | orchestrator | 2025-09-17 00:52:42.994371 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-17 00:52:42.994377 | orchestrator | Wednesday 17 September 2025 00:45:58 +0000 (0:00:00.802) 0:04:12.445 *** 2025-09-17 00:52:42.994383 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
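The quorum wait above is an Ansible `until`/`retries`/`delay` loop that repeatedly queries the monitors until they report quorum; here it needed one retry (about 22 seconds) before succeeding. A minimal standalone sketch of that polling pattern, with a hypothetical `check` callable standing in for the actual quorum-status query, might look like:

```python
import time


def wait_for_quorum(check, retries=10, delay=5):
    """Poll `check` until it reports quorum, mirroring an Ansible
    retries/until loop; raise TimeoutError when retries are exhausted.
    Returns the number of polls it took."""
    for attempt in range(retries):
        if check():
            return attempt + 1
        # Ansible would log "FAILED - RETRYING ... (N retries left)" here.
        time.sleep(delay)
    raise TimeoutError("monitors never formed a quorum")


# Example: quorum forms on the second poll, as in the log above.
polls = iter([False, True])
print(wait_for_quorum(lambda: next(polls), retries=10, delay=0))  # → 2
```

In the real task the task duration (0:00:22.004) is dominated by the delay between polls, not by the quorum check itself.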
2025-09-17 00:52:42.994389 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.994395 | orchestrator | 2025-09-17 00:52:42.994401 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-17 00:52:42.994407 | orchestrator | Wednesday 17 September 2025 00:46:20 +0000 (0:00:22.004) 0:04:34.450 *** 2025-09-17 00:52:42.994413 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.994419 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.994425 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.994431 | orchestrator | 2025-09-17 00:52:42.994438 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-17 00:52:42.994444 | orchestrator | Wednesday 17 September 2025 00:46:29 +0000 (0:00:09.260) 0:04:43.710 *** 2025-09-17 00:52:42.994450 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.994456 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.994462 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.994468 | orchestrator | 2025-09-17 00:52:42.994474 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-17 00:52:42.994480 | orchestrator | Wednesday 17 September 2025 00:46:30 +0000 (0:00:00.301) 0:04:44.012 *** 2025-09-17 00:52:42.994509 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f08e63a6eb3a43dc25b27c4385b7643fb064fcc7'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-17 00:52:42.994519 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f08e63a6eb3a43dc25b27c4385b7643fb064fcc7'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-17 00:52:42.994526 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f08e63a6eb3a43dc25b27c4385b7643fb064fcc7'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-17 00:52:42.994537 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f08e63a6eb3a43dc25b27c4385b7643fb064fcc7'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-17 00:52:42.994602 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f08e63a6eb3a43dc25b27c4385b7643fb064fcc7'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-17 00:52:42.994610 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f08e63a6eb3a43dc25b27c4385b7643fb064fcc7'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f08e63a6eb3a43dc25b27c4385b7643fb064fcc7'}])  2025-09-17 00:52:42.994617 | orchestrator | 2025-09-17 00:52:42.994623 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2025-09-17 00:52:42.994629 | orchestrator | Wednesday 17 September 2025 00:46:46 +0000 (0:00:15.902) 0:04:59.914 *** 2025-09-17 00:52:42.994635 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.994641 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.994647 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.994653 | orchestrator | 2025-09-17 00:52:42.994660 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-17 00:52:42.994666 | orchestrator | Wednesday 17 September 2025 00:46:46 +0000 (0:00:00.440) 0:05:00.355 *** 2025-09-17 00:52:42.994672 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:52:42.994678 | orchestrator | 2025-09-17 00:52:42.994684 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-17 00:52:42.994690 | orchestrator | Wednesday 17 September 2025 00:46:47 +0000 (0:00:00.718) 0:05:01.073 *** 2025-09-17 00:52:42.994696 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.994702 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.994708 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.994714 | orchestrator | 2025-09-17 00:52:42.994720 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-17 00:52:42.994726 | orchestrator | Wednesday 17 September 2025 00:46:47 +0000 (0:00:00.381) 0:05:01.454 *** 2025-09-17 00:52:42.994732 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.994738 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.994744 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.994750 | orchestrator | 2025-09-17 00:52:42.994756 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-17 
00:52:42.994762 | orchestrator | Wednesday 17 September 2025 00:46:47 +0000 (0:00:00.396) 0:05:01.851 *** 2025-09-17 00:52:42.994768 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-17 00:52:42.994774 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-17 00:52:42.994780 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-17 00:52:42.994786 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.994792 | orchestrator | 2025-09-17 00:52:42.994798 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-17 00:52:42.994804 | orchestrator | Wednesday 17 September 2025 00:46:49 +0000 (0:00:01.031) 0:05:02.883 *** 2025-09-17 00:52:42.994810 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.994816 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.994822 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.994828 | orchestrator | 2025-09-17 00:52:42.994854 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-17 00:52:42.994862 | orchestrator | 2025-09-17 00:52:42.994868 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-17 00:52:42.994874 | orchestrator | Wednesday 17 September 2025 00:46:49 +0000 (0:00:00.819) 0:05:03.702 *** 2025-09-17 00:52:42.994885 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:52:42.994892 | orchestrator | 2025-09-17 00:52:42.994898 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-17 00:52:42.994939 | orchestrator | Wednesday 17 September 2025 00:46:50 +0000 (0:00:00.496) 0:05:04.198 *** 2025-09-17 00:52:42.994946 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-17 00:52:42.994953 | orchestrator | 2025-09-17 00:52:42.994959 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-17 00:52:42.994965 | orchestrator | Wednesday 17 September 2025 00:46:51 +0000 (0:00:00.805) 0:05:05.004 *** 2025-09-17 00:52:42.994971 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.994977 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.994983 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.994989 | orchestrator | 2025-09-17 00:52:42.994996 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-17 00:52:42.995002 | orchestrator | Wednesday 17 September 2025 00:46:51 +0000 (0:00:00.724) 0:05:05.728 *** 2025-09-17 00:52:42.995008 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.995014 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.995023 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.995030 | orchestrator | 2025-09-17 00:52:42.995036 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-17 00:52:42.995042 | orchestrator | Wednesday 17 September 2025 00:46:52 +0000 (0:00:00.317) 0:05:06.045 *** 2025-09-17 00:52:42.995047 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.995052 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.995058 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.995063 | orchestrator | 2025-09-17 00:52:42.995068 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-17 00:52:42.995074 | orchestrator | Wednesday 17 September 2025 00:46:52 +0000 (0:00:00.329) 0:05:06.375 *** 2025-09-17 00:52:42.995079 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.995084 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.995089 | orchestrator | skipping: 
[testbed-node-2] 2025-09-17 00:52:42.995095 | orchestrator | 2025-09-17 00:52:42.995100 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-17 00:52:42.995105 | orchestrator | Wednesday 17 September 2025 00:46:53 +0000 (0:00:00.545) 0:05:06.921 *** 2025-09-17 00:52:42.995111 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.995116 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.995121 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.995127 | orchestrator | 2025-09-17 00:52:42.995132 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-17 00:52:42.995137 | orchestrator | Wednesday 17 September 2025 00:46:53 +0000 (0:00:00.761) 0:05:07.683 *** 2025-09-17 00:52:42.995143 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.995148 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.995153 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.995158 | orchestrator | 2025-09-17 00:52:42.995164 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-17 00:52:42.995169 | orchestrator | Wednesday 17 September 2025 00:46:54 +0000 (0:00:00.323) 0:05:08.007 *** 2025-09-17 00:52:42.995174 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.995179 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.995185 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.995190 | orchestrator | 2025-09-17 00:52:42.995195 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-17 00:52:42.995201 | orchestrator | Wednesday 17 September 2025 00:46:54 +0000 (0:00:00.337) 0:05:08.345 *** 2025-09-17 00:52:42.995206 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.995211 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.995222 | orchestrator | ok: [testbed-node-2] 2025-09-17 
00:52:42.995227 | orchestrator | 2025-09-17 00:52:42.995233 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-17 00:52:42.995238 | orchestrator | Wednesday 17 September 2025 00:46:55 +0000 (0:00:01.001) 0:05:09.346 *** 2025-09-17 00:52:42.995243 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.995249 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.995254 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.995260 | orchestrator | 2025-09-17 00:52:42.995265 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-17 00:52:42.995270 | orchestrator | Wednesday 17 September 2025 00:46:56 +0000 (0:00:00.822) 0:05:10.169 *** 2025-09-17 00:52:42.995276 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.995281 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.995287 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.995292 | orchestrator | 2025-09-17 00:52:42.995297 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-17 00:52:42.995303 | orchestrator | Wednesday 17 September 2025 00:46:56 +0000 (0:00:00.322) 0:05:10.491 *** 2025-09-17 00:52:42.995308 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.995313 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.995319 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.995324 | orchestrator | 2025-09-17 00:52:42.995330 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-17 00:52:42.995335 | orchestrator | Wednesday 17 September 2025 00:46:56 +0000 (0:00:00.343) 0:05:10.834 *** 2025-09-17 00:52:42.995340 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.995346 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.995351 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.995356 | orchestrator | 
2025-09-17 00:52:42.995362 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-17 00:52:42.995367 | orchestrator | Wednesday 17 September 2025 00:46:57 +0000 (0:00:00.293) 0:05:11.128 *** 2025-09-17 00:52:42.995373 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.995378 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.995401 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.995408 | orchestrator | 2025-09-17 00:52:42.995413 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-17 00:52:42.995418 | orchestrator | Wednesday 17 September 2025 00:46:57 +0000 (0:00:00.497) 0:05:11.625 *** 2025-09-17 00:52:42.995424 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.995429 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.995434 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.995439 | orchestrator | 2025-09-17 00:52:42.995445 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-17 00:52:42.995450 | orchestrator | Wednesday 17 September 2025 00:46:58 +0000 (0:00:00.305) 0:05:11.931 *** 2025-09-17 00:52:42.995455 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.995461 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.995466 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.995471 | orchestrator | 2025-09-17 00:52:42.995477 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-17 00:52:42.995482 | orchestrator | Wednesday 17 September 2025 00:46:58 +0000 (0:00:00.355) 0:05:12.287 *** 2025-09-17 00:52:42.995487 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.995493 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.995498 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.995503 | orchestrator | 
2025-09-17 00:52:42.995509 | orchestrator |
TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Wednesday 17 September 2025 00:46:58 +0000 (0:00:00.293) 0:05:12.580 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Wednesday 17 September 2025 00:46:59 +0000 (0:00:00.584) 0:05:13.165 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Wednesday 17 September 2025 00:46:59 +0000 (0:00:00.373) 0:05:13.539 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Wednesday 17 September 2025 00:47:00 +0000 (0:00:00.525) 0:05:14.065 ***
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Wednesday 17 September 2025 00:47:01 +0000 (0:00:00.980) 0:05:15.045 ***
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Wednesday 17 September 2025 00:47:02 +0000 (0:00:00.977) 0:05:16.023 ***
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Wednesday 17 September 2025 00:47:02 +0000 (0:00:00.756) 0:05:16.779 ***
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Wednesday 17 September 2025 00:47:03 +0000 (0:00:00.330) 0:05:17.110 ***
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
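The "Create ceph mgr keyring(s) on a mon node" step above runs only on the first monitor and loops over the mgr hosts. A minimal sketch of what such a task typically looks like in ceph-ansible style (the module arguments and variable names here are illustrative, not the verbatim role source):

```yaml
# Hypothetical sketch of the mgr keyring creation, delegated to the
# first monitor; caps follow the usual "allow profile mgr" pattern.
- name: Create ceph mgr keyring(s) on a mon node
  ceph_key:
    name: "mgr.{{ hostvars[item]['ansible_facts']['hostname'] }}"
    cluster: "{{ cluster }}"
    caps:
      mon: "allow profile mgr"
      osd: "allow *"
      mds: "allow *"
  loop: "{{ groups[mgr_group_name] }}"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```

Delegating to one monitor explains why the log shows three `(item=None)` results all reported by testbed-node-0: one keyring per mgr host, all created from the same mon.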
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Wednesday 17 September 2025 00:47:14 +0000 (0:00:11.189) 0:05:28.300 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Wednesday 17 September 2025 00:47:15 +0000 (0:00:00.605) 0:05:28.905 ***
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Wednesday 17 September 2025 00:47:17 +0000 (0:00:02.478) 0:05:31.383 ***
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Wednesday 17 September 2025 00:47:18 +0000 (0:00:01.276) 0:05:32.660 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Wednesday 17 September 2025 00:47:19 +0000 (0:00:00.667) 0:05:33.327 ***
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Wednesday 17 September 2025 00:47:19 +0000 (0:00:00.514) 0:05:33.842 ***
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Wednesday 17 September 2025 00:47:20 +0000 (0:00:00.291) 0:05:34.133 ***
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Wednesday 17 September 2025 00:47:20 +0000 (0:00:00.519) 0:05:34.653 ***
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Wednesday 17 September 2025 00:47:21 +0000 (0:00:00.533) 0:05:35.186 ***
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Wednesday 17 September 2025 00:47:21 +0000 (0:00:00.339) 0:05:35.525 ***
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Wednesday 17 September 2025 00:47:22 +0000 (0:00:00.524) 0:05:36.050 ***
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Wednesday 17 September 2025 00:47:23 +0000 (0:00:01.424) 0:05:37.474 ***
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Wednesday 17 September 2025 00:47:24 +0000 (0:00:01.212) 0:05:38.686 ***
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Wednesday 17 September 2025 00:47:27 +0000 (0:00:02.650) 0:05:41.337 ***
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Wednesday 17 September 2025 00:47:29 +0000 (0:00:01.942) 0:05:43.279 ***
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Wednesday 17 September 2025 00:47:30 +0000 (0:00:00.669) 0:05:43.949 ***
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
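The `FAILED - RETRYING` lines on "Wait for all mgr to be up" are Ansible's normal `until`/`retries` polling, not an error: the task re-runs `ceph mgr dump` on the first monitor until every expected mgr appears in the map. A hedged sketch of that pattern (retry count matches the "30 retries" seen above; the `delay` and the exact `until` condition are assumptions, not the verbatim role source):

```yaml
# Illustrative retry loop: poll the mgr map until the active mgr and
# all standbys are registered; six polls failed here before success.
- name: Wait for all mgr to be up
  command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} mgr dump -f json"
  register: mgr_dump
  until:
    - (mgr_dump.stdout | from_json).available | bool
  retries: 30
  delay: 5
  changed_when: false
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```

This is why the task took 36 seconds in the timer line that follows: roughly six failed polls plus the delay between them while the freshly started mgr daemons registered with the monitors.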
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Wednesday 17 September 2025 00:48:06 +0000 (0:00:36.442) 0:06:20.391 ***
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Wednesday 17 September 2025 00:48:07 +0000 (0:00:01.384) 0:06:21.776 ***
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Wednesday 17 September 2025 00:48:08 +0000 (0:00:00.312) 0:06:22.089 ***
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Wednesday 17 September 2025 00:48:08 +0000 (0:00:00.137) 0:06:22.226 ***
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Wednesday 17 September 2025 00:48:14 +0000 (0:00:06.542) 0:06:28.769 ***
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Wednesday 17 September 2025 00:48:19 +0000 (0:00:05.014) 0:06:33.783 ***
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Wednesday 17 September 2025 00:48:20 +0000 (0:00:00.716) 0:06:34.500 ***
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Wednesday 17 September 2025 00:48:21 +0000 (0:00:00.307) 0:06:35.013 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Wednesday 17 September 2025 00:48:21 +0000 (0:00:00.307) 0:06:35.321 ***
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Wednesday 17 September 2025 00:48:22 +0000 (0:00:01.470) 0:06:36.791 ***
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Wednesday 17 September 2025 00:48:23 +0000 (0:00:00.612) 0:06:37.403 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Wednesday 17 September 2025 00:48:24 +0000 (0:00:00.529) 0:06:37.933 ***
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Wednesday 17 September 2025 00:48:24 +0000 (0:00:00.738) 0:06:38.671 ***
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Wednesday 17 September 2025 00:48:25 +0000 (0:00:00.523) 0:06:39.194 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Wednesday 17 September 2025 00:48:25 +0000 (0:00:00.510) 0:06:39.705 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Wednesday 17 September 2025 00:48:26 +0000 (0:00:00.644) 0:06:40.350 ***
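The `check_running_containers.yml` probes above all follow one shape: ask the container runtime whether a daemon's container exists, record the answer, and never fail the play (the handlers later consult these facts to decide what to restart). A hedged sketch of one such probe, with the filter name and register variable chosen for illustration:

```yaml
# Illustrative container probe: succeed whether or not the container
# exists, so the registered result can drive handler_*_status facts.
- name: Check for an osd container
  command: "{{ container_binary }} ps -q --filter name=ceph-osd"
  register: ceph_osd_container_stat
  changed_when: false
  failed_when: false
  check_mode: false
```

`ok` in the log means the probe ran, not that a container was found; the found/not-found distinction lives in the registered stdout and feeds the `Set_fact handler_*_status` tasks that follow.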
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Wednesday 17 September 2025 00:48:27 +0000 (0:00:00.699) 0:06:41.050 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Wednesday 17 September 2025 00:48:27 +0000 (0:00:00.688) 0:06:41.738 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Wednesday 17 September 2025 00:48:28 +0000 (0:00:00.524) 0:06:42.263 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Wednesday 17 September 2025 00:48:28 +0000 (0:00:00.304) 0:06:42.567 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Wednesday 17 September 2025 00:48:29 +0000 (0:00:00.305) 0:06:42.872 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Wednesday 17 September 2025 00:48:29 +0000 (0:00:00.711) 0:06:43.584 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Wednesday 17 September 2025 00:48:30 +0000 (0:00:00.935) 0:06:44.519 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Wednesday 17 September 2025 00:48:30 +0000 (0:00:00.327) 0:06:44.846 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Wednesday 17 September 2025 00:48:31 +0000 (0:00:00.318) 0:06:45.165 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Wednesday 17 September 2025 00:48:31 +0000 (0:00:00.351) 0:06:45.516 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Wednesday 17 September 2025 00:48:32 +0000 (0:00:00.538) 0:06:46.055 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Wednesday 17 September 2025 00:48:32 +0000 (0:00:00.403) 0:06:46.459 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Wednesday 17 September 2025 00:48:32 +0000 (0:00:00.307) 0:06:46.767 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Wednesday 17 September 2025 00:48:33 +0000 (0:00:00.296) 0:06:47.063 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Wednesday 17 September 2025 00:48:33 +0000 (0:00:00.283) 0:06:47.346 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Wednesday 17 September 2025 00:48:34 +0000 (0:00:00.652) 0:06:47.999 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Wednesday 17 September 2025 00:48:34 +0000 (0:00:00.509) 0:06:48.508 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Wednesday 17 September 2025 00:48:35 +0000 (0:00:00.359) 0:06:48.868 ***
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Wednesday 17 September 2025 00:48:36 +0000 (0:00:01.140) 0:06:50.008 ***
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Wednesday 17 September 2025 00:48:36 +0000 (0:00:00.515) 0:06:50.524 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Wednesday 17 September 2025 00:48:36 +0000 (0:00:00.320) 0:06:50.845 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Wednesday 17 September 2025 00:48:37 +0000 (0:00:00.555) 0:06:51.400 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Wednesday 17 September 2025 00:48:38 +0000 (0:00:00.648) 0:06:52.049 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Wednesday 17 September 2025 00:48:38 +0000 (0:00:00.335) 0:06:52.385 ***
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
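The "Apply operating system tuning" items logged above map directly onto a sysctl loop. A sketch of that loop using the `ansible.posix.sysctl` module, with the values copied from the log (the `sysctl_file` path and exact structure are assumptions, not the verbatim ceph-ansible task):

```yaml
# Illustrative sysctl loop: the five kernel knobs the OSD nodes were
# tuned with above; vm.min_free_kbytes is derived from node memory.
- name: Apply operating system tuning
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_file: /etc/sysctl.d/ceph-tuning.conf
  loop:
    - { name: fs.aio-max-nr, value: "1048576" }     # async I/O contexts for many OSDs
    - { name: fs.file-max, value: "26234859" }      # global open-file limit
    - { name: vm.zone_reclaim_mode, value: "0" }    # avoid NUMA-local reclaim stalls
    - { name: vm.swappiness, value: "10" }          # prefer page cache over swap
    - { name: vm.min_free_kbytes, value: "67584" }  # from the Set_fact task above
```

The interleaved per-node ordering in the log (node-5 before node-3 on some items) is just Ansible's default `linear` strategy reporting hosts as each finishes an item, not a difference in what was applied.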
2025-09-17 00:52:42.997771 | orchestrator | Wednesday 17 September 2025 00:48:40 +0000 (0:00:02.255) 0:06:54.641 *** 2025-09-17 00:52:42.997780 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.997785 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.997790 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.997795 | orchestrator | 2025-09-17 00:52:42.997801 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-17 00:52:42.997806 | orchestrator | Wednesday 17 September 2025 00:48:41 +0000 (0:00:00.545) 0:06:55.186 *** 2025-09-17 00:52:42.997811 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:52:42.997817 | orchestrator | 2025-09-17 00:52:42.997822 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-17 00:52:42.997827 | orchestrator | Wednesday 17 September 2025 00:48:41 +0000 (0:00:00.567) 0:06:55.753 *** 2025-09-17 00:52:42.997833 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-17 00:52:42.997838 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-17 00:52:42.997847 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-17 00:52:42.997852 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-09-17 00:52:42.997858 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-17 00:52:42.997863 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-17 00:52:42.997868 | orchestrator | 2025-09-17 00:52:42.997874 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-17 00:52:42.997879 | orchestrator | Wednesday 17 September 2025 00:48:42 +0000 (0:00:01.058) 0:06:56.812 *** 2025-09-17 00:52:42.997884 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:52:42.997890 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-17 00:52:42.997895 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-17 00:52:42.997900 | orchestrator | 2025-09-17 00:52:42.997917 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-17 00:52:42.997923 | orchestrator | Wednesday 17 September 2025 00:48:45 +0000 (0:00:02.275) 0:06:59.088 *** 2025-09-17 00:52:42.997928 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-17 00:52:42.997934 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-17 00:52:42.997939 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:52:42.997944 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-17 00:52:42.997950 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-17 00:52:42.997955 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:52:42.997960 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-17 00:52:42.997965 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-17 00:52:42.997971 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:52:42.997976 | orchestrator | 2025-09-17 00:52:42.997981 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-17 00:52:42.997987 | orchestrator | Wednesday 17 September 2025 00:48:46 +0000 (0:00:01.462) 0:07:00.550 *** 2025-09-17 00:52:42.997992 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-17 00:52:42.997997 | orchestrator | 2025-09-17 00:52:42.998002 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-17 00:52:42.998007 | orchestrator | Wednesday 17 September 2025 00:48:48 +0000 (0:00:02.306) 0:07:02.857 *** 2025-09-17 00:52:42.998013 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:52:42.998044 | orchestrator | 2025-09-17 00:52:42.998049 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-17 00:52:42.998055 | orchestrator | Wednesday 17 September 2025 00:48:49 +0000 (0:00:00.531) 0:07:03.389 *** 2025-09-17 00:52:42.998060 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d', 'data_vg': 'ceph-f65d6451-63aa-5ff6-99b4-c6c20cacdd2d'}) 2025-09-17 00:52:42.998066 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac', 'data_vg': 'ceph-3f2c044b-dfa5-5506-ae92-c5b86c73e5ac'}) 2025-09-17 00:52:42.998076 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2dc6576b-ad92-58b3-afc8-22b8ce20905e', 'data_vg': 'ceph-2dc6576b-ad92-58b3-afc8-22b8ce20905e'}) 2025-09-17 00:52:42.998081 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d1158166-3610-5fc1-bd8e-5288705939fa', 'data_vg': 'ceph-d1158166-3610-5fc1-bd8e-5288705939fa'}) 2025-09-17 00:52:42.998087 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15', 'data_vg': 'ceph-fe66c6e3-4f85-5e6e-b974-d8af1fb98b15'}) 2025-09-17 00:52:42.998092 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a7b5a8de-6218-5c80-971a-bac3422a4161', 'data_vg': 'ceph-a7b5a8de-6218-5c80-971a-bac3422a4161'}) 2025-09-17 00:52:42.998097 | orchestrator | 2025-09-17 00:52:42.998103 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-17 00:52:42.998112 | orchestrator | Wednesday 17 September 2025 00:49:27 +0000 (0:00:38.070) 0:07:41.459 *** 2025-09-17 00:52:42.998117 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998123 | orchestrator | skipping: [testbed-node-4] 2025-09-17 
00:52:42.998128 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.998133 | orchestrator | 2025-09-17 00:52:42.998139 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-17 00:52:42.998144 | orchestrator | Wednesday 17 September 2025 00:49:28 +0000 (0:00:00.535) 0:07:41.994 *** 2025-09-17 00:52:42.998149 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:52:42.998154 | orchestrator | 2025-09-17 00:52:42.998163 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-17 00:52:42.998168 | orchestrator | Wednesday 17 September 2025 00:49:28 +0000 (0:00:00.508) 0:07:42.503 *** 2025-09-17 00:52:42.998174 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.998179 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.998184 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.998190 | orchestrator | 2025-09-17 00:52:42.998195 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-17 00:52:42.998200 | orchestrator | Wednesday 17 September 2025 00:49:29 +0000 (0:00:00.677) 0:07:43.181 *** 2025-09-17 00:52:42.998206 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.998211 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.998216 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.998221 | orchestrator | 2025-09-17 00:52:42.998227 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-17 00:52:42.998232 | orchestrator | Wednesday 17 September 2025 00:49:32 +0000 (0:00:02.974) 0:07:46.155 *** 2025-09-17 00:52:42.998237 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:52:42.998243 | orchestrator | 2025-09-17 00:52:42.998248 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-09-17 00:52:42.998253 | orchestrator | Wednesday 17 September 2025 00:49:32 +0000 (0:00:00.516) 0:07:46.671 *** 2025-09-17 00:52:42.998259 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:52:42.998264 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:52:42.998269 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:52:42.998275 | orchestrator | 2025-09-17 00:52:42.998280 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-17 00:52:42.998285 | orchestrator | Wednesday 17 September 2025 00:49:33 +0000 (0:00:01.175) 0:07:47.847 *** 2025-09-17 00:52:42.998290 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:52:42.998296 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:52:42.998301 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:52:42.998306 | orchestrator | 2025-09-17 00:52:42.998312 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-17 00:52:42.998317 | orchestrator | Wednesday 17 September 2025 00:49:35 +0000 (0:00:01.454) 0:07:49.301 *** 2025-09-17 00:52:42.998322 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:52:42.998327 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:52:42.998333 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:52:42.998338 | orchestrator | 2025-09-17 00:52:42.998343 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-17 00:52:42.998349 | orchestrator | Wednesday 17 September 2025 00:49:37 +0000 (0:00:01.738) 0:07:51.040 *** 2025-09-17 00:52:42.998354 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998359 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.998364 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.998370 | orchestrator | 2025-09-17 00:52:42.998375 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-09-17 00:52:42.998380 | orchestrator | Wednesday 17 September 2025 00:49:37 +0000 (0:00:00.396) 0:07:51.436 *** 2025-09-17 00:52:42.998385 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998394 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.998400 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.998405 | orchestrator | 2025-09-17 00:52:42.998410 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-17 00:52:42.998415 | orchestrator | Wednesday 17 September 2025 00:49:37 +0000 (0:00:00.371) 0:07:51.808 *** 2025-09-17 00:52:42.998421 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-17 00:52:42.998426 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-09-17 00:52:42.998431 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-09-17 00:52:42.998436 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-09-17 00:52:42.998442 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-09-17 00:52:42.998447 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-09-17 00:52:42.998452 | orchestrator | 2025-09-17 00:52:42.998458 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-17 00:52:42.998463 | orchestrator | Wednesday 17 September 2025 00:49:39 +0000 (0:00:01.334) 0:07:53.142 *** 2025-09-17 00:52:42.998468 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-17 00:52:42.998474 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-09-17 00:52:42.998479 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-17 00:52:42.998484 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-17 00:52:42.998492 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-17 00:52:42.998497 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-17 00:52:42.998503 | orchestrator | 2025-09-17 00:52:42.998508 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-09-17 00:52:42.998513 | orchestrator | Wednesday 17 September 2025 00:49:41 +0000 (0:00:02.135) 0:07:55.278 *** 2025-09-17 00:52:42.998519 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-17 00:52:42.998524 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-09-17 00:52:42.998529 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-17 00:52:42.998534 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-17 00:52:42.998540 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-17 00:52:42.998545 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-17 00:52:42.998550 | orchestrator | 2025-09-17 00:52:42.998556 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-17 00:52:42.998561 | orchestrator | Wednesday 17 September 2025 00:49:44 +0000 (0:00:03.451) 0:07:58.729 *** 2025-09-17 00:52:42.998566 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998571 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.998577 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-17 00:52:42.998582 | orchestrator | 2025-09-17 00:52:42.998587 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-17 00:52:42.998593 | orchestrator | Wednesday 17 September 2025 00:49:48 +0000 (0:00:03.389) 0:08:02.119 *** 2025-09-17 00:52:42.998598 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998603 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.998611 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-09-17 00:52:42.998617 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-17 00:52:42.998622 | orchestrator | 2025-09-17 00:52:42.998628 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-17 00:52:42.998633 | orchestrator | Wednesday 17 September 2025 00:50:01 +0000 (0:00:12.794) 0:08:14.913 *** 2025-09-17 00:52:42.998638 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998643 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.998648 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.998654 | orchestrator | 2025-09-17 00:52:42.998659 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-17 00:52:42.998664 | orchestrator | Wednesday 17 September 2025 00:50:01 +0000 (0:00:00.897) 0:08:15.811 *** 2025-09-17 00:52:42.998669 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998679 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.998684 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.998690 | orchestrator | 2025-09-17 00:52:42.998695 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-17 00:52:42.998700 | orchestrator | Wednesday 17 September 2025 00:50:02 +0000 (0:00:00.557) 0:08:16.368 *** 2025-09-17 00:52:42.998706 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:52:42.998711 | orchestrator | 2025-09-17 00:52:42.998716 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-17 00:52:42.998722 | orchestrator | Wednesday 17 September 2025 00:50:03 +0000 (0:00:00.507) 0:08:16.876 *** 2025-09-17 00:52:42.998727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 00:52:42.998732 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-09-17 00:52:42.998738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 00:52:42.998743 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998748 | orchestrator | 2025-09-17 00:52:42.998753 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-17 00:52:42.998759 | orchestrator | Wednesday 17 September 2025 00:50:03 +0000 (0:00:00.460) 0:08:17.337 *** 2025-09-17 00:52:42.998764 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998769 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.998774 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.998780 | orchestrator | 2025-09-17 00:52:42.998785 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-17 00:52:42.998790 | orchestrator | Wednesday 17 September 2025 00:50:04 +0000 (0:00:00.535) 0:08:17.873 *** 2025-09-17 00:52:42.998795 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998801 | orchestrator | 2025-09-17 00:52:42.998806 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-17 00:52:42.998811 | orchestrator | Wednesday 17 September 2025 00:50:04 +0000 (0:00:00.222) 0:08:18.095 *** 2025-09-17 00:52:42.998816 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998822 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.998827 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.998832 | orchestrator | 2025-09-17 00:52:42.998837 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-17 00:52:42.998843 | orchestrator | Wednesday 17 September 2025 00:50:04 +0000 (0:00:00.290) 0:08:18.386 *** 2025-09-17 00:52:42.998848 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998853 | orchestrator | 2025-09-17 00:52:42.998859 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-17 00:52:42.998864 | orchestrator | Wednesday 17 September 2025 00:50:04 +0000 (0:00:00.225) 0:08:18.612 *** 2025-09-17 00:52:42.998869 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998874 | orchestrator | 2025-09-17 00:52:42.998880 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-17 00:52:42.998885 | orchestrator | Wednesday 17 September 2025 00:50:04 +0000 (0:00:00.226) 0:08:18.838 *** 2025-09-17 00:52:42.998890 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998895 | orchestrator | 2025-09-17 00:52:42.998901 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-17 00:52:42.998938 | orchestrator | Wednesday 17 September 2025 00:50:05 +0000 (0:00:00.132) 0:08:18.971 *** 2025-09-17 00:52:42.998944 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998949 | orchestrator | 2025-09-17 00:52:42.998957 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-17 00:52:42.998963 | orchestrator | Wednesday 17 September 2025 00:50:05 +0000 (0:00:00.221) 0:08:19.192 *** 2025-09-17 00:52:42.998968 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.998974 | orchestrator | 2025-09-17 00:52:42.998979 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-17 00:52:42.998988 | orchestrator | Wednesday 17 September 2025 00:50:05 +0000 (0:00:00.238) 0:08:19.431 *** 2025-09-17 00:52:42.998994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-17 00:52:42.998999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 00:52:42.999004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 00:52:42.999008 | orchestrator | skipping: [testbed-node-3] 2025-09-17 
00:52:42.999013 | orchestrator | 2025-09-17 00:52:42.999018 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-17 00:52:42.999023 | orchestrator | Wednesday 17 September 2025 00:50:06 +0000 (0:00:00.617) 0:08:20.049 *** 2025-09-17 00:52:42.999027 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.999032 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.999037 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.999041 | orchestrator | 2025-09-17 00:52:42.999046 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-17 00:52:42.999051 | orchestrator | Wednesday 17 September 2025 00:50:06 +0000 (0:00:00.555) 0:08:20.604 *** 2025-09-17 00:52:42.999055 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.999060 | orchestrator | 2025-09-17 00:52:42.999065 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-17 00:52:42.999074 | orchestrator | Wednesday 17 September 2025 00:50:06 +0000 (0:00:00.228) 0:08:20.833 *** 2025-09-17 00:52:42.999079 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.999083 | orchestrator | 2025-09-17 00:52:42.999088 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-17 00:52:42.999093 | orchestrator | 2025-09-17 00:52:42.999097 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-17 00:52:42.999102 | orchestrator | Wednesday 17 September 2025 00:50:07 +0000 (0:00:00.690) 0:08:21.524 *** 2025-09-17 00:52:42.999107 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:52:42.999113 | orchestrator | 2025-09-17 00:52:42.999117 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-09-17 00:52:42.999122 | orchestrator | Wednesday 17 September 2025 00:50:08 +0000 (0:00:01.176) 0:08:22.701 *** 2025-09-17 00:52:42.999127 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:52:42.999132 | orchestrator | 2025-09-17 00:52:42.999136 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-17 00:52:42.999141 | orchestrator | Wednesday 17 September 2025 00:50:09 +0000 (0:00:01.145) 0:08:23.847 *** 2025-09-17 00:52:42.999146 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.999150 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.999155 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.999160 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.999164 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.999169 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.999174 | orchestrator | 2025-09-17 00:52:42.999178 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-17 00:52:42.999183 | orchestrator | Wednesday 17 September 2025 00:50:11 +0000 (0:00:01.223) 0:08:25.071 *** 2025-09-17 00:52:42.999188 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.999192 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.999197 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.999202 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.999206 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.999211 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.999216 | orchestrator | 2025-09-17 00:52:42.999221 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-17 00:52:42.999225 | orchestrator | Wednesday 17 
September 2025 00:50:11 +0000 (0:00:00.740) 0:08:25.811 *** 2025-09-17 00:52:42.999233 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.999238 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.999242 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.999247 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.999252 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.999257 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.999261 | orchestrator | 2025-09-17 00:52:42.999266 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-17 00:52:42.999271 | orchestrator | Wednesday 17 September 2025 00:50:12 +0000 (0:00:00.846) 0:08:26.658 *** 2025-09-17 00:52:42.999275 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.999280 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.999285 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.999289 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.999294 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.999299 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.999303 | orchestrator | 2025-09-17 00:52:42.999308 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-17 00:52:42.999313 | orchestrator | Wednesday 17 September 2025 00:50:13 +0000 (0:00:00.730) 0:08:27.388 *** 2025-09-17 00:52:42.999318 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.999322 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.999327 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.999331 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.999336 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.999341 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.999345 | orchestrator | 2025-09-17 00:52:42.999350 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-09-17 00:52:42.999355 | orchestrator | Wednesday 17 September 2025 00:50:14 +0000 (0:00:01.284) 0:08:28.673 *** 2025-09-17 00:52:42.999360 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.999364 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.999371 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.999376 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.999381 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.999386 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.999390 | orchestrator | 2025-09-17 00:52:42.999395 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-17 00:52:42.999400 | orchestrator | Wednesday 17 September 2025 00:50:15 +0000 (0:00:00.607) 0:08:29.280 *** 2025-09-17 00:52:42.999404 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.999409 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.999414 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.999418 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.999423 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.999428 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.999432 | orchestrator | 2025-09-17 00:52:42.999437 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-17 00:52:42.999442 | orchestrator | Wednesday 17 September 2025 00:50:16 +0000 (0:00:00.809) 0:08:30.090 *** 2025-09-17 00:52:42.999446 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.999451 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.999456 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.999460 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.999465 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.999470 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.999475 | orchestrator 
| 2025-09-17 00:52:42.999479 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-17 00:52:42.999484 | orchestrator | Wednesday 17 September 2025 00:50:17 +0000 (0:00:01.084) 0:08:31.174 *** 2025-09-17 00:52:42.999489 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:42.999496 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:42.999501 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:42.999506 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.999514 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:52:42.999518 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:52:42.999523 | orchestrator | 2025-09-17 00:52:42.999528 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-17 00:52:42.999532 | orchestrator | Wednesday 17 September 2025 00:50:18 +0000 (0:00:01.157) 0:08:32.332 *** 2025-09-17 00:52:42.999537 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.999542 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.999547 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.999551 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:52:42.999556 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:52:42.999561 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:52:42.999565 | orchestrator | 2025-09-17 00:52:42.999570 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-17 00:52:42.999575 | orchestrator | Wednesday 17 September 2025 00:50:19 +0000 (0:00:00.582) 0:08:32.914 *** 2025-09-17 00:52:42.999580 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:42.999584 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:42.999589 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:42.999594 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:52:42.999598 | orchestrator | ok: [testbed-node-1] 2025-09-17 
00:52:42.999603 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.999608 | orchestrator |
2025-09-17 00:52:42.999612 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-17 00:52:42.999617 | orchestrator | Wednesday 17 September 2025 00:50:19 +0000 (0:00:00.774) 0:08:33.688 ***
2025-09-17 00:52:42.999622 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.999627 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.999631 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.999636 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.999641 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.999645 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.999650 | orchestrator |
2025-09-17 00:52:42.999655 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-17 00:52:42.999660 | orchestrator | Wednesday 17 September 2025 00:50:20 +0000 (0:00:00.585) 0:08:34.275 ***
2025-09-17 00:52:42.999664 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.999669 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.999674 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.999678 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.999683 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.999688 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.999692 | orchestrator |
2025-09-17 00:52:42.999697 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-17 00:52:42.999702 | orchestrator | Wednesday 17 September 2025 00:50:20 +0000 (0:00:00.545) 0:08:34.820 ***
2025-09-17 00:52:42.999707 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.999711 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.999716 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.999721 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.999725 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.999730 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.999735 | orchestrator |
2025-09-17 00:52:42.999739 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-17 00:52:42.999744 | orchestrator | Wednesday 17 September 2025 00:50:21 +0000 (0:00:00.831) 0:08:35.652 ***
2025-09-17 00:52:42.999749 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.999753 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.999758 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.999763 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.999767 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.999772 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.999777 | orchestrator |
2025-09-17 00:52:42.999782 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-17 00:52:42.999790 | orchestrator | Wednesday 17 September 2025 00:50:22 +0000 (0:00:00.564) 0:08:36.216 ***
2025-09-17 00:52:42.999794 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.999799 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.999804 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.999808 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:52:42.999813 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:52:42.999818 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:52:42.999822 | orchestrator |
2025-09-17 00:52:42.999827 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-17 00:52:42.999832 | orchestrator | Wednesday 17 September 2025 00:50:23 +0000 (0:00:00.798) 0:08:37.014 ***
2025-09-17 00:52:42.999839 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:42.999844 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:42.999849 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:42.999853 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.999858 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.999863 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.999867 | orchestrator |
2025-09-17 00:52:42.999872 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-17 00:52:42.999877 | orchestrator | Wednesday 17 September 2025 00:50:23 +0000 (0:00:00.580) 0:08:37.595 ***
2025-09-17 00:52:42.999882 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.999886 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.999891 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.999896 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.999901 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.999917 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.999922 | orchestrator |
2025-09-17 00:52:42.999927 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-17 00:52:42.999931 | orchestrator | Wednesday 17 September 2025 00:50:24 +0000 (0:00:00.812) 0:08:38.407 ***
2025-09-17 00:52:42.999936 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:42.999941 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:42.999946 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:42.999950 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:42.999955 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:42.999960 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:42.999964 | orchestrator |
2025-09-17 00:52:42.999969 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-09-17 00:52:42.999974 | orchestrator | Wednesday 17 September 2025 00:50:25 +0000 (0:00:01.207) 0:08:39.614 ***
2025-09-17 00:52:42.999981 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-17 00:52:42.999986 | orchestrator |
2025-09-17 00:52:42.999991 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-09-17 00:52:42.999996 | orchestrator | Wednesday 17 September 2025 00:50:29 +0000 (0:00:04.069) 0:08:43.684 ***
2025-09-17 00:52:43 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-17 00:52:43.000005 | orchestrator |
2025-09-17 00:52:43.000010 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-09-17 00:52:43.000015 | orchestrator | Wednesday 17 September 2025 00:50:31 +0000 (0:00:02.062) 0:08:45.747 ***
2025-09-17 00:52:43.000020 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.000025 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.000029 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.000034 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:43.000039 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:43.000043 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:43.000048 | orchestrator |
2025-09-17 00:52:43.000053 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-09-17 00:52:43.000058 | orchestrator | Wednesday 17 September 2025 00:50:33 +0000 (0:00:01.787) 0:08:47.535 ***
2025-09-17 00:52:43.000062 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.000067 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.000076 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.000081 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:43.000085 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:43.000090 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:43.000095 | orchestrator |
2025-09-17 00:52:43.000099 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-09-17 00:52:43.000104 | orchestrator | Wednesday 17 September 2025 00:50:34 +0000 (0:00:01.068) 0:08:48.603 ***
2025-09-17 00:52:43.000109 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:43.000115 | orchestrator |
2025-09-17 00:52:43.000119 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-09-17 00:52:43.000124 | orchestrator | Wednesday 17 September 2025 00:50:36 +0000 (0:00:01.307) 0:08:49.911 ***
2025-09-17 00:52:43.000129 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.000134 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.000138 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.000143 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:43.000148 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:43.000153 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:43.000157 | orchestrator |
2025-09-17 00:52:43.000162 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-09-17 00:52:43.000167 | orchestrator | Wednesday 17 September 2025 00:50:37 +0000 (0:00:01.895) 0:08:51.806 ***
2025-09-17 00:52:43.000172 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.000176 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.000181 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:43.000186 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.000190 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:43.000195 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:43.000200 | orchestrator |
2025-09-17 00:52:43.000204 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-09-17 00:52:43.000209 | orchestrator | Wednesday 17 September 2025 00:50:41 +0000 (0:00:03.637) 0:08:55.444 ***
2025-09-17 00:52:43.000214 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:52:43.000219 | orchestrator |
2025-09-17 00:52:43.000224 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-09-17 00:52:43.000229 | orchestrator | Wednesday 17 September 2025 00:50:42 +0000 (0:00:01.067) 0:08:56.511 ***
2025-09-17 00:52:43.000233 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000238 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000243 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000248 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:43.000252 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:43.000257 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:43.000262 | orchestrator |
2025-09-17 00:52:43.000266 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-09-17 00:52:43.000271 | orchestrator | Wednesday 17 September 2025 00:50:43 +0000 (0:00:00.549) 0:08:57.061 ***
2025-09-17 00:52:43.000278 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.000283 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.000288 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.000293 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:52:43.000298 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:52:43.000302 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:52:43.000307 | orchestrator |
2025-09-17 00:52:43.000312 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-09-17 00:52:43.000317 | orchestrator | Wednesday 17 September 2025 00:50:46 +0000 (0:00:03.426) 0:09:00.487 ***
2025-09-17 00:52:43.000321 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000326 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000334 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000339 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:52:43.000344 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:52:43.000348 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:52:43.000353 | orchestrator |
2025-09-17 00:52:43.000358 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-09-17 00:52:43.000363 | orchestrator |
2025-09-17 00:52:43.000367 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-17 00:52:43.000372 | orchestrator | Wednesday 17 September 2025 00:50:47 +0000 (0:00:00.928) 0:09:01.416 ***
2025-09-17 00:52:43.000377 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:52:43.000382 | orchestrator |
2025-09-17 00:52:43.000387 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-17 00:52:43.000391 | orchestrator | Wednesday 17 September 2025 00:50:48 +0000 (0:00:00.452) 0:09:01.868 ***
2025-09-17 00:52:43.000399 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:52:43.000404 | orchestrator |
2025-09-17 00:52:43.000408 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-17 00:52:43.000413 | orchestrator | Wednesday 17 September 2025 00:50:48 +0000 (0:00:00.543) 0:09:02.412 ***
2025-09-17 00:52:43.000418 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.000423 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.000428 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.000432 | orchestrator |
2025-09-17 00:52:43.000437 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-17 00:52:43.000442 | orchestrator | Wednesday 17 September 2025 00:50:48 +0000 (0:00:00.242) 0:09:02.655 ***
2025-09-17 00:52:43.000447 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000451 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000456 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000461 | orchestrator |
2025-09-17 00:52:43.000465 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-17 00:52:43.000470 | orchestrator | Wednesday 17 September 2025 00:50:49 +0000 (0:00:00.665) 0:09:03.320 ***
2025-09-17 00:52:43.000475 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000480 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000484 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000489 | orchestrator |
2025-09-17 00:52:43.000494 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-17 00:52:43.000499 | orchestrator | Wednesday 17 September 2025 00:50:50 +0000 (0:00:00.671) 0:09:03.992 ***
2025-09-17 00:52:43.000503 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000508 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000513 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000517 | orchestrator |
2025-09-17 00:52:43.000522 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-17 00:52:43.000527 | orchestrator | Wednesday 17 September 2025 00:50:50 +0000 (0:00:00.825) 0:09:04.818 ***
2025-09-17 00:52:43.000532 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.000536 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.000541 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.000546 | orchestrator |
2025-09-17 00:52:43.000551 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-17 00:52:43.000555 | orchestrator | Wednesday 17 September 2025 00:50:51 +0000 (0:00:00.266) 0:09:05.085 ***
2025-09-17 00:52:43.000560 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.000565 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.000570 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.000574 | orchestrator |
2025-09-17 00:52:43.000579 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-17 00:52:43.000584 | orchestrator | Wednesday 17 September 2025 00:50:51 +0000 (0:00:00.221) 0:09:05.306 ***
2025-09-17 00:52:43.000592 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.000597 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.000601 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.000606 | orchestrator |
2025-09-17 00:52:43.000611 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-17 00:52:43.000615 | orchestrator | Wednesday 17 September 2025 00:50:51 +0000 (0:00:00.211) 0:09:05.517 ***
2025-09-17 00:52:43.000620 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000625 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000630 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000634 | orchestrator |
2025-09-17 00:52:43.000639 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-17 00:52:43.000644 | orchestrator | Wednesday 17 September 2025 00:50:52 +0000 (0:00:00.850) 0:09:06.367 ***
2025-09-17 00:52:43.000649 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000653 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000658 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000663 | orchestrator |
2025-09-17 00:52:43.000668 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-17 00:52:43.000672 | orchestrator | Wednesday 17 September 2025 00:50:53 +0000 (0:00:00.709) 0:09:07.077 ***
2025-09-17 00:52:43.000677 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.000682 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.000686 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.000691 | orchestrator |
2025-09-17 00:52:43.000696 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-17 00:52:43.000701 | orchestrator | Wednesday 17 September 2025 00:50:53 +0000 (0:00:00.217) 0:09:07.295 ***
2025-09-17 00:52:43.000708 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.000713 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.000718 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.000722 | orchestrator |
2025-09-17 00:52:43.000727 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-17 00:52:43.000732 | orchestrator | Wednesday 17 September 2025 00:50:53 +0000 (0:00:00.218) 0:09:07.513 ***
2025-09-17 00:52:43.000737 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000741 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000746 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000751 | orchestrator |
2025-09-17 00:52:43.000756 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-17 00:52:43.000760 | orchestrator | Wednesday 17 September 2025 00:50:54 +0000 (0:00:00.415) 0:09:07.929 ***
2025-09-17 00:52:43.000765 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000770 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000775 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000779 | orchestrator |
2025-09-17 00:52:43.000784 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-17 00:52:43.000789 | orchestrator | Wednesday 17 September 2025 00:50:54 +0000 (0:00:00.298) 0:09:08.227 ***
2025-09-17 00:52:43.000794 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000798 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000803 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000808 | orchestrator |
2025-09-17 00:52:43.000812 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-17 00:52:43.000817 | orchestrator | Wednesday 17 September 2025 00:50:54 +0000 (0:00:00.397) 0:09:08.625 ***
2025-09-17 00:52:43.000822 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.000827 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.000834 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.000839 | orchestrator |
2025-09-17 00:52:43.000844 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-17 00:52:43.000848 | orchestrator | Wednesday 17 September 2025 00:50:55 +0000 (0:00:00.399) 0:09:09.024 ***
2025-09-17 00:52:43.000853 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.000858 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.000866 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.000871 | orchestrator |
2025-09-17 00:52:43.000875 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-17 00:52:43.000880 | orchestrator | Wednesday 17 September 2025 00:50:55 +0000 (0:00:00.528) 0:09:09.553 ***
2025-09-17 00:52:43.000885 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.000890 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.000894 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.000899 | orchestrator |
2025-09-17 00:52:43.000916 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-17 00:52:43.000921 | orchestrator | Wednesday 17 September 2025 00:50:55 +0000 (0:00:00.230) 0:09:09.783 ***
2025-09-17 00:52:43.000926 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000931 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000935 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000940 | orchestrator |
2025-09-17 00:52:43.000945 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-17 00:52:43.000950 | orchestrator | Wednesday 17 September 2025 00:50:56 +0000 (0:00:00.257) 0:09:10.040 ***
2025-09-17 00:52:43.000954 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.000959 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.000964 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.000968 | orchestrator |
2025-09-17 00:52:43.000973 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-09-17 00:52:43.000978 | orchestrator | Wednesday 17 September 2025 00:50:56 +0000 (0:00:00.547) 0:09:10.588 ***
2025-09-17 00:52:43.000983 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.000987 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.000992 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-09-17 00:52:43.000997 | orchestrator |
2025-09-17 00:52:43.001001 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-09-17 00:52:43.001006 | orchestrator | Wednesday 17 September 2025 00:50:57 +0000 (0:00:00.358) 0:09:10.947 ***
2025-09-17 00:52:43.001011 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-17 00:52:43.001016 | orchestrator |
2025-09-17 00:52:43.001021 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-09-17 00:52:43.001025 | orchestrator | Wednesday 17 September 2025 00:50:59 +0000 (0:00:02.088) 0:09:13.035 ***
2025-09-17 00:52:43.001031 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-09-17 00:52:43.001037 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.001042 | orchestrator |
2025-09-17 00:52:43.001047 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-09-17 00:52:43.001051 | orchestrator | Wednesday 17 September 2025 00:50:59 +0000 (0:00:00.260) 0:09:13.296 ***
2025-09-17 00:52:43.001057 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-17 00:52:43.001066 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-17 00:52:43.001071 | orchestrator |
2025-09-17 00:52:43.001076 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-09-17 00:52:43.001084 | orchestrator | Wednesday 17 September 2025 00:51:07 +0000 (0:00:08.026) 0:09:21.322 ***
2025-09-17 00:52:43.001089 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-17 00:52:43.001094 | orchestrator |
2025-09-17 00:52:43.001102 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-09-17 00:52:43.001107 | orchestrator | Wednesday 17 September 2025 00:51:11 +0000 (0:00:03.740) 0:09:25.062 ***
2025-09-17 00:52:43.001111 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:52:43.001116 | orchestrator |
2025-09-17 00:52:43.001121 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-09-17 00:52:43.001125 | orchestrator | Wednesday 17 September 2025 00:51:11 +0000 (0:00:00.742) 0:09:25.804 ***
2025-09-17 00:52:43.001130 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-17 00:52:43.001135 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-17 00:52:43.001140 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-17 00:52:43.001144 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-09-17 00:52:43.001149 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-09-17 00:52:43.001154 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-09-17 00:52:43.001159 | orchestrator |
2025-09-17 00:52:43.001163 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-09-17 00:52:43.001171 | orchestrator | Wednesday 17 September 2025 00:51:13 +0000 (0:00:01.118) 0:09:26.923 ***
2025-09-17 00:52:43.001176 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:52:43.001180 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-17 00:52:43.001185 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-17 00:52:43.001190 | orchestrator |
2025-09-17 00:52:43.001194 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-09-17 00:52:43.001199 | orchestrator | Wednesday 17 September 2025 00:51:15 +0000 (0:00:02.315) 0:09:29.238 ***
2025-09-17 00:52:43.001204 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-17 00:52:43.001208 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-17 00:52:43.001213 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.001218 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-17 00:52:43.001222 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-17 00:52:43.001227 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.001232 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-17 00:52:43.001237 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-17 00:52:43.001241 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.001246 | orchestrator |
2025-09-17 00:52:43.001251 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-09-17 00:52:43.001255 | orchestrator | Wednesday 17 September 2025 00:51:16 +0000 (0:00:01.187) 0:09:30.426 ***
2025-09-17 00:52:43.001260 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.001265 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.001270 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.001274 | orchestrator |
2025-09-17 00:52:43.001279 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-09-17 00:52:43.001284 | orchestrator | Wednesday 17 September 2025 00:51:19 +0000 (0:00:02.780) 0:09:33.207 ***
2025-09-17 00:52:43.001288 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.001293 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.001298 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.001302 | orchestrator |
2025-09-17 00:52:43.001307 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-09-17 00:52:43.001312 | orchestrator | Wednesday 17 September 2025 00:51:19 +0000 (0:00:00.277) 0:09:33.484 ***
2025-09-17 00:52:43.001317 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:52:43.001321 | orchestrator |
2025-09-17 00:52:43.001326 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-09-17 00:52:43.001334 | orchestrator | Wednesday 17 September 2025 00:51:20 +0000 (0:00:00.493) 0:09:33.978 ***
2025-09-17 00:52:43.001339 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:52:43.001344 | orchestrator |
2025-09-17 00:52:43.001349 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-09-17 00:52:43.001353 | orchestrator | Wednesday 17 September 2025 00:51:20 +0000 (0:00:00.615) 0:09:34.594 ***
2025-09-17 00:52:43.001358 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.001363 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.001368 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.001372 | orchestrator |
2025-09-17 00:52:43.001377 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-09-17 00:52:43.001382 | orchestrator | Wednesday 17 September 2025 00:51:21 +0000 (0:00:01.207) 0:09:35.802 ***
2025-09-17 00:52:43.001386 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.001391 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.001396 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.001401 | orchestrator |
2025-09-17 00:52:43.001406 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-09-17 00:52:43.001410 | orchestrator | Wednesday 17 September 2025 00:51:23 +0000 (0:00:01.172) 0:09:36.975 ***
2025-09-17 00:52:43.001415 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.001420 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.001424 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.001429 | orchestrator |
2025-09-17 00:52:43.001434 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-09-17 00:52:43.001439 | orchestrator | Wednesday 17 September 2025 00:51:25 +0000 (0:00:02.229) 0:09:39.205 ***
2025-09-17 00:52:43.001443 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.001451 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.001456 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.001460 | orchestrator |
2025-09-17 00:52:43.001465 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-09-17 00:52:43.001470 | orchestrator | Wednesday 17 September 2025 00:51:27 +0000 (0:00:02.064) 0:09:41.269 ***
2025-09-17 00:52:43.001474 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.001479 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.001484 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.001489 | orchestrator |
2025-09-17 00:52:43.001493 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-17 00:52:43.001498 | orchestrator | Wednesday 17 September 2025 00:51:28 +0000 (0:00:01.488) 0:09:42.758 ***
2025-09-17 00:52:43.001503 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.001507 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.001512 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.001517 | orchestrator |
2025-09-17 00:52:43.001522 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-17 00:52:43.001526 | orchestrator | Wednesday 17 September 2025 00:51:29 +0000 (0:00:00.671) 0:09:43.430 ***
2025-09-17 00:52:43.001531 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:52:43.001536 | orchestrator |
2025-09-17 00:52:43.001541 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-17 00:52:43.001545 | orchestrator | Wednesday 17 September 2025 00:51:30 +0000 (0:00:00.474) 0:09:43.904 ***
2025-09-17 00:52:43.001550 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.001555 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.001564 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.001569 | orchestrator |
2025-09-17 00:52:43.001574 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-17 00:52:43.001578 | orchestrator | Wednesday 17 September 2025 00:51:30 +0000 (0:00:00.426) 0:09:44.330 ***
2025-09-17 00:52:43.001587 | orchestrator | changed: [testbed-node-3]
2025-09-17 00:52:43.001591 | orchestrator | changed: [testbed-node-4]
2025-09-17 00:52:43.001596 | orchestrator | changed: [testbed-node-5]
2025-09-17 00:52:43.001601 | orchestrator |
2025-09-17 00:52:43.001606 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-17 00:52:43.001610 | orchestrator | Wednesday 17 September 2025 00:51:31 +0000 (0:00:01.175) 0:09:45.505 ***
2025-09-17 00:52:43.001615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:52:43.001620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:52:43.001625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:52:43.001629 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.001634 | orchestrator |
2025-09-17 00:52:43.001639 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-17 00:52:43.001643 | orchestrator | Wednesday 17 September 2025 00:51:32 +0000 (0:00:00.557) 0:09:46.063 ***
2025-09-17 00:52:43.001648 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.001653 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.001657 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.001662 | orchestrator |
2025-09-17 00:52:43.001667 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-17 00:52:43.001672 | orchestrator |
2025-09-17 00:52:43.001676 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-17 00:52:43.001681 | orchestrator | Wednesday 17 September 2025 00:51:32 +0000 (0:00:00.471) 0:09:46.535 ***
2025-09-17 00:52:43.001686 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:52:43.001691 | orchestrator |
2025-09-17 00:52:43.001695 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-17 00:52:43.001700 | orchestrator | Wednesday 17 September 2025 00:51:33 +0000 (0:00:00.583) 0:09:47.119 ***
2025-09-17 00:52:43.001705 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:52:43.001710 | orchestrator |
2025-09-17 00:52:43.001714 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-17 00:52:43.001719 | orchestrator | Wednesday 17 September 2025 00:51:33 +0000 (0:00:00.441) 0:09:47.560 ***
2025-09-17 00:52:43.001724 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.001728 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.001733 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.001738 | orchestrator |
2025-09-17 00:52:43.001743 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-17 00:52:43.001747 | orchestrator | Wednesday 17 September 2025 00:51:34 +0000 (0:00:00.416) 0:09:47.976 ***
2025-09-17 00:52:43.001752 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.001757 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.001762 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.001766 | orchestrator |
2025-09-17 00:52:43.001771 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-17 00:52:43.001776 | orchestrator | Wednesday 17 September 2025 00:51:34 +0000 (0:00:00.661) 0:09:48.638 ***
2025-09-17 00:52:43.001780 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.001785 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.001790 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.001795 | orchestrator |
2025-09-17 00:52:43.001799 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-17 00:52:43.001804 | orchestrator | Wednesday 17 September 2025 00:51:35 +0000 (0:00:00.700) 0:09:49.338 ***
2025-09-17 00:52:43.001809 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.001814 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.001818 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.001823 | orchestrator |
2025-09-17 00:52:43.001828 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-17 00:52:43.001833 | orchestrator | Wednesday 17 September 2025 00:51:36 +0000 (0:00:00.711) 0:09:50.049 ***
2025-09-17 00:52:43.001841 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.001846 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.001851 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.001856 | orchestrator |
2025-09-17 00:52:43.001863 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-17 00:52:43.001868 | orchestrator | Wednesday 17 September 2025 00:51:36 +0000 (0:00:00.650) 0:09:50.700 ***
2025-09-17 00:52:43.001873 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.001878 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.001882 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.001887 | orchestrator |
2025-09-17 00:52:43.001892 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-17 00:52:43.001897 | orchestrator | Wednesday 17 September 2025 00:51:37 +0000 (0:00:00.324) 0:09:51.025 ***
2025-09-17 00:52:43.001912 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.001917 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.001922 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:52:43.001927 | orchestrator |
2025-09-17 00:52:43.001931 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-17 00:52:43.001936 | orchestrator | Wednesday 17 September 2025 00:51:37 +0000 (0:00:00.295) 0:09:51.321 ***
2025-09-17 00:52:43.001941 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.001945 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.001950 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.001955 | orchestrator |
2025-09-17 00:52:43.001959 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-17 00:52:43.001964 | orchestrator | Wednesday 17 September 2025 00:51:38 +0000 (0:00:00.756) 0:09:52.077 ***
2025-09-17 00:52:43.001969 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:52:43.001974 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:52:43.001978 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:52:43.001983 | orchestrator |
2025-09-17 00:52:43.001990 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-17 00:52:43.001995 | orchestrator | Wednesday 17 September 2025 00:51:39 +0000 (0:00:01.025) 0:09:53.102 ***
2025-09-17 00:52:43.002000 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:52:43.002005 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:52:43.002009 | orchestrator | skipping: [testbed-node-5]
2025-09-17
00:52:43.002027 | orchestrator | 2025-09-17 00:52:43.002033 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-17 00:52:43.002038 | orchestrator | Wednesday 17 September 2025 00:51:39 +0000 (0:00:00.258) 0:09:53.361 *** 2025-09-17 00:52:43.002043 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:43.002047 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:43.002052 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:43.002057 | orchestrator | 2025-09-17 00:52:43.002062 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-17 00:52:43.002066 | orchestrator | Wednesday 17 September 2025 00:51:39 +0000 (0:00:00.255) 0:09:53.616 *** 2025-09-17 00:52:43.002071 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:43.002076 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:43.002080 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:43.002085 | orchestrator | 2025-09-17 00:52:43.002090 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-17 00:52:43.002095 | orchestrator | Wednesday 17 September 2025 00:51:40 +0000 (0:00:00.297) 0:09:53.913 *** 2025-09-17 00:52:43.002099 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:43.002104 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:43.002109 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:43.002114 | orchestrator | 2025-09-17 00:52:43.002118 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-17 00:52:43.002123 | orchestrator | Wednesday 17 September 2025 00:51:40 +0000 (0:00:00.452) 0:09:54.366 *** 2025-09-17 00:52:43.002128 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:43.002136 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:43.002141 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:43.002145 | orchestrator | 2025-09-17 
00:52:43.002150 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-17 00:52:43.002155 | orchestrator | Wednesday 17 September 2025 00:51:40 +0000 (0:00:00.301) 0:09:54.668 *** 2025-09-17 00:52:43.002160 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:43.002165 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:43.002169 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:43.002174 | orchestrator | 2025-09-17 00:52:43.002179 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-17 00:52:43.002184 | orchestrator | Wednesday 17 September 2025 00:51:41 +0000 (0:00:00.278) 0:09:54.946 *** 2025-09-17 00:52:43.002188 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:43.002193 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:43.002198 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:43.002202 | orchestrator | 2025-09-17 00:52:43.002207 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-17 00:52:43.002212 | orchestrator | Wednesday 17 September 2025 00:51:41 +0000 (0:00:00.289) 0:09:55.236 *** 2025-09-17 00:52:43.002216 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:43.002221 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:43.002226 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:43.002231 | orchestrator | 2025-09-17 00:52:43.002235 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-17 00:52:43.002240 | orchestrator | Wednesday 17 September 2025 00:51:41 +0000 (0:00:00.430) 0:09:55.667 *** 2025-09-17 00:52:43.002245 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:43.002250 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:43.002254 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:43.002259 | orchestrator | 2025-09-17 00:52:43.002264 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-17 00:52:43.002269 | orchestrator | Wednesday 17 September 2025 00:51:42 +0000 (0:00:00.300) 0:09:55.967 *** 2025-09-17 00:52:43.002273 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:43.002278 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:43.002283 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:43.002287 | orchestrator | 2025-09-17 00:52:43.002292 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-17 00:52:43.002297 | orchestrator | Wednesday 17 September 2025 00:51:42 +0000 (0:00:00.523) 0:09:56.490 *** 2025-09-17 00:52:43.002302 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:52:43.002306 | orchestrator | 2025-09-17 00:52:43.002311 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-17 00:52:43.002319 | orchestrator | Wednesday 17 September 2025 00:51:43 +0000 (0:00:00.640) 0:09:57.131 *** 2025-09-17 00:52:43.002324 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:52:43.002328 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-17 00:52:43.002333 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-17 00:52:43.002338 | orchestrator | 2025-09-17 00:52:43.002343 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-17 00:52:43.002347 | orchestrator | Wednesday 17 September 2025 00:51:45 +0000 (0:00:02.233) 0:09:59.364 *** 2025-09-17 00:52:43.002352 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-17 00:52:43.002357 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-17 00:52:43.002361 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-17 00:52:43.002366 
| orchestrator | changed: [testbed-node-4] 2025-09-17 00:52:43.002371 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-17 00:52:43.002376 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:52:43.002380 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-17 00:52:43.002388 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-17 00:52:43.002393 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:52:43.002398 | orchestrator | 2025-09-17 00:52:43.002403 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-17 00:52:43.002407 | orchestrator | Wednesday 17 September 2025 00:51:46 +0000 (0:00:01.204) 0:10:00.569 *** 2025-09-17 00:52:43.002412 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:43.002417 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:43.002424 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:43.002429 | orchestrator | 2025-09-17 00:52:43.002434 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-17 00:52:43.002439 | orchestrator | Wednesday 17 September 2025 00:51:46 +0000 (0:00:00.281) 0:10:00.851 *** 2025-09-17 00:52:43.002443 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:52:43.002448 | orchestrator | 2025-09-17 00:52:43.002453 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-17 00:52:43.002458 | orchestrator | Wednesday 17 September 2025 00:51:47 +0000 (0:00:00.618) 0:10:01.469 *** 2025-09-17 00:52:43.002462 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:43.002467 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:43.002472 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:43.002477 | orchestrator | 2025-09-17 00:52:43.002482 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-17 00:52:43.002487 | orchestrator | Wednesday 17 September 2025 00:51:48 +0000 (0:00:00.809) 0:10:02.278 *** 2025-09-17 00:52:43.002491 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:52:43.002496 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-17 00:52:43.002501 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:52:43.002506 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-17 00:52:43.002510 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:52:43.002515 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-17 00:52:43.002520 | orchestrator | 2025-09-17 00:52:43.002525 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-17 00:52:43.002529 | orchestrator | Wednesday 17 September 2025 00:51:52 +0000 (0:00:04.040) 0:10:06.319 *** 2025-09-17 00:52:43.002534 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:52:43.002539 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-17 00:52:43.002543 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:52:43.002548 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-17 00:52:43.002553 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:52:43.002558 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-17 00:52:43.002562 | orchestrator | 2025-09-17 00:52:43.002567 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-17 00:52:43.002572 | orchestrator | Wednesday 17 September 2025 00:51:55 +0000 (0:00:02.651) 0:10:08.970 *** 2025-09-17 00:52:43.002580 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-17 00:52:43.002585 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:52:43.002589 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-17 00:52:43.002594 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:52:43.002599 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-17 00:52:43.002604 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:52:43.002608 | orchestrator | 2025-09-17 00:52:43.002613 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-17 00:52:43.002620 | orchestrator | Wednesday 17 September 2025 00:51:56 +0000 (0:00:01.397) 0:10:10.367 *** 2025-09-17 00:52:43.002625 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-17 00:52:43.002630 | orchestrator | 2025-09-17 00:52:43.002635 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-17 00:52:43.002639 | orchestrator | Wednesday 17 September 2025 00:51:56 +0000 (0:00:00.262) 0:10:10.630 *** 2025-09-17 00:52:43.002644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-09-17 00:52:43.002649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 00:52:43.002654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 00:52:43.002659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 00:52:43.002664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 00:52:43.002669 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:43.002674 | orchestrator | 2025-09-17 00:52:43.002678 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-17 00:52:43.002686 | orchestrator | Wednesday 17 September 2025 00:51:57 +0000 (0:00:00.699) 0:10:11.329 *** 2025-09-17 00:52:43.002691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 00:52:43.002695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 00:52:43.002700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 00:52:43.002705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 00:52:43.002710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 00:52:43.002715 | orchestrator | skipping: [testbed-node-3] 2025-09-17 
00:52:43.002720 | orchestrator | 2025-09-17 00:52:43.002724 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-17 00:52:43.002729 | orchestrator | Wednesday 17 September 2025 00:51:58 +0000 (0:00:00.790) 0:10:12.120 *** 2025-09-17 00:52:43.002734 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-17 00:52:43.002739 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-17 00:52:43.002744 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-17 00:52:43.002749 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-17 00:52:43.002757 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-17 00:52:43.002761 | orchestrator | 2025-09-17 00:52:43.002766 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-17 00:52:43.002771 | orchestrator | Wednesday 17 September 2025 00:52:29 +0000 (0:00:31.507) 0:10:43.627 *** 2025-09-17 00:52:43.002776 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:43.002780 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:43.002785 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:43.002790 | orchestrator | 2025-09-17 00:52:43.002795 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-17 00:52:43.002799 | orchestrator | 
Wednesday 17 September 2025 00:52:30 +0000 (0:00:00.536) 0:10:44.163 *** 2025-09-17 00:52:43.002804 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:43.002809 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:43.002814 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:43.002818 | orchestrator | 2025-09-17 00:52:43.002823 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-17 00:52:43.002828 | orchestrator | Wednesday 17 September 2025 00:52:30 +0000 (0:00:00.335) 0:10:44.499 *** 2025-09-17 00:52:43.002833 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:52:43.002837 | orchestrator | 2025-09-17 00:52:43.002842 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-17 00:52:43.002847 | orchestrator | Wednesday 17 September 2025 00:52:31 +0000 (0:00:00.554) 0:10:45.054 *** 2025-09-17 00:52:43.002852 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:52:43.002857 | orchestrator | 2025-09-17 00:52:43.002863 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-17 00:52:43.002869 | orchestrator | Wednesday 17 September 2025 00:52:31 +0000 (0:00:00.726) 0:10:45.781 *** 2025-09-17 00:52:43.002873 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:52:43.002878 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:52:43.002883 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:52:43.002887 | orchestrator | 2025-09-17 00:52:43.002892 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-17 00:52:43.002897 | orchestrator | Wednesday 17 September 2025 00:52:33 +0000 (0:00:01.316) 0:10:47.097 *** 2025-09-17 00:52:43.002929 | orchestrator | changed: 
[testbed-node-3] 2025-09-17 00:52:43.002935 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:52:43.002940 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:52:43.002944 | orchestrator | 2025-09-17 00:52:43.002949 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-17 00:52:43.002954 | orchestrator | Wednesday 17 September 2025 00:52:34 +0000 (0:00:01.189) 0:10:48.287 *** 2025-09-17 00:52:43.002959 | orchestrator | changed: [testbed-node-3] 2025-09-17 00:52:43.002963 | orchestrator | changed: [testbed-node-4] 2025-09-17 00:52:43.002968 | orchestrator | changed: [testbed-node-5] 2025-09-17 00:52:43.002973 | orchestrator | 2025-09-17 00:52:43.002978 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-17 00:52:43.002982 | orchestrator | Wednesday 17 September 2025 00:52:36 +0000 (0:00:02.216) 0:10:50.504 *** 2025-09-17 00:52:43.002987 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:43.002995 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:43.003000 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-17 00:52:43.003005 | orchestrator | 2025-09-17 00:52:43.003010 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-17 00:52:43.003018 | orchestrator | Wednesday 17 September 2025 00:52:39 +0000 (0:00:02.455) 0:10:52.960 *** 2025-09-17 00:52:43.003023 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:43.003027 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:43.003032 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:43.003037 | orchestrator 
| 2025-09-17 00:52:43.003042 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-17 00:52:43.003046 | orchestrator | Wednesday 17 September 2025 00:52:39 +0000 (0:00:00.548) 0:10:53.509 *** 2025-09-17 00:52:43.003051 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:52:43.003056 | orchestrator | 2025-09-17 00:52:43.003061 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-17 00:52:43.003066 | orchestrator | Wednesday 17 September 2025 00:52:40 +0000 (0:00:00.542) 0:10:54.051 *** 2025-09-17 00:52:43.003070 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:43.003075 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:43.003079 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:43.003084 | orchestrator | 2025-09-17 00:52:43.003088 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-17 00:52:43.003093 | orchestrator | Wednesday 17 September 2025 00:52:40 +0000 (0:00:00.311) 0:10:54.363 *** 2025-09-17 00:52:43.003097 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:52:43.003102 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:52:43.003106 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:52:43.003111 | orchestrator | 2025-09-17 00:52:43.003115 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-17 00:52:43.003120 | orchestrator | Wednesday 17 September 2025 00:52:41 +0000 (0:00:00.562) 0:10:54.925 *** 2025-09-17 00:52:43.003124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 00:52:43.003129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-17 00:52:43.003133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 00:52:43.003138 | orchestrator 
| skipping: [testbed-node-3] 2025-09-17 00:52:43.003142 | orchestrator | 2025-09-17 00:52:43.003147 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-17 00:52:43.003151 | orchestrator | Wednesday 17 September 2025 00:52:41 +0000 (0:00:00.597) 0:10:55.523 *** 2025-09-17 00:52:43.003156 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:52:43.003160 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:52:43.003165 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:52:43.003169 | orchestrator | 2025-09-17 00:52:43.003174 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:52:43.003178 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-17 00:52:43.003183 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-17 00:52:43.003187 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-17 00:52:43.003192 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-17 00:52:43.003197 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-17 00:52:43.003203 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-17 00:52:43.003208 | orchestrator | 2025-09-17 00:52:43.003213 | orchestrator | 2025-09-17 00:52:43.003217 | orchestrator | 2025-09-17 00:52:43.003225 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:52:43.003230 | orchestrator | Wednesday 17 September 2025 00:52:41 +0000 (0:00:00.267) 0:10:55.790 *** 2025-09-17 00:52:43.003234 | orchestrator | =============================================================================== 
2025-09-17 00:52:43.003239 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 62.29s 2025-09-17 00:52:43.003243 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.07s 2025-09-17 00:52:43.003248 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.44s 2025-09-17 00:52:43.003254 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.51s 2025-09-17 00:52:43.003261 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.00s 2025-09-17 00:52:43.003268 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.90s 2025-09-17 00:52:43.003275 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.79s 2025-09-17 00:52:43.003284 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.19s 2025-09-17 00:52:43.003290 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.26s 2025-09-17 00:52:43.003299 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.03s 2025-09-17 00:52:43.003309 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.54s 2025-09-17 00:52:43.003316 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.44s 2025-09-17 00:52:43.003322 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.01s 2025-09-17 00:52:43.003329 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.07s 2025-09-17 00:52:43.003335 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.04s 2025-09-17 00:52:43.003342 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.86s 2025-09-17 
00:52:43.003349 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.74s 2025-09-17 00:52:43.003355 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.64s 2025-09-17 00:52:43.003363 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.45s 2025-09-17 00:52:43.003370 | orchestrator | ceph-handler : Restart the ceph-crash service --------------------------- 3.43s 2025-09-17 00:52:43.003378 | orchestrator | 2025-09-17 00:52:42 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:52:43.003385 | orchestrator | 2025-09-17 00:52:42 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:52:46.030487 | orchestrator | 2025-09-17 00:52:46 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:52:46.030840 | orchestrator | 2025-09-17 00:52:46 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:52:46.032205 | orchestrator | 2025-09-17 00:52:46 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state STARTED 2025-09-17 00:52:46.032243 | orchestrator | 2025-09-17 00:52:46 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:53:13.450227 | orchestrator | 2025-09-17 00:53:13 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:53:13.452161 | orchestrator | 2025-09-17 00:53:13 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:53:13.454879 | orchestrator | 2025-09-17 00:53:13 | INFO  | Task 096f694d-9f82-4e33-b3fd-e20fc7e369c7 is in state SUCCESS 2025-09-17 00:53:13.457318 | orchestrator | 2025-09-17 00:53:13.457353 | orchestrator | 2025-09-17 00:53:13.457365 | orchestrator | PLAY [Group hosts based on
configuration] **************************************
2025-09-17 00:53:13.457377 | orchestrator |
2025-09-17 00:53:13.457388 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 00:53:13.457399 | orchestrator | Wednesday 17 September 2025 00:50:31 +0000 (0:00:00.259) 0:00:00.259 ***
2025-09-17 00:53:13.457410 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:53:13.457422 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:53:13.457433 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:53:13.457444 | orchestrator |
2025-09-17 00:53:13.457455 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 00:53:13.457466 | orchestrator | Wednesday 17 September 2025 00:50:32 +0000 (0:00:00.294) 0:00:00.553 ***
2025-09-17 00:53:13.457477 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-09-17 00:53:13.457488 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-09-17 00:53:13.457499 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-09-17 00:53:13.457510 | orchestrator |
2025-09-17 00:53:13.457520 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-09-17 00:53:13.457531 | orchestrator |
2025-09-17 00:53:13.457542 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-17 00:53:13.457552 | orchestrator | Wednesday 17 September 2025 00:50:32 +0000 (0:00:00.439) 0:00:00.993 ***
2025-09-17 00:53:13.457564 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:53:13.457575 | orchestrator |
2025-09-17 00:53:13.457587 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-09-17 00:53:13.457598 | orchestrator | Wednesday 17 September 2025 00:50:32 +0000 (0:00:00.491) 0:00:01.484
*** 2025-09-17 00:53:13.457609 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-17 00:53:13.457620 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-17 00:53:13.457631 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-17 00:53:13.457641 | orchestrator | 2025-09-17 00:53:13.457652 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-17 00:53:13.457663 | orchestrator | Wednesday 17 September 2025 00:50:33 +0000 (0:00:00.742) 0:00:02.227 *** 2025-09-17 00:53:13.457677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:53:13.457714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:53:13.457755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:53:13.457770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.457784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.457802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.457821 | orchestrator | 2025-09-17 00:53:13.457832 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-17 00:53:13.457843 | orchestrator | Wednesday 17 September 2025 00:50:35 +0000 (0:00:01.741) 0:00:03.969 *** 2025-09-17 00:53:13.457854 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:53:13.457865 | orchestrator | 2025-09-17 00:53:13.457876 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-17 00:53:13.457886 | orchestrator | Wednesday 17 September 2025 00:50:35 +0000 (0:00:00.530) 0:00:04.499 *** 2025-09-17 00:53:13.457931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:53:13.457946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:53:13.457959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:53:13.457979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.458007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.458074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.458086 | orchestrator | 2025-09-17 00:53:13.458097 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-17 00:53:13.458109 | orchestrator | Wednesday 17 September 2025 00:50:39 +0000 (0:00:03.382) 0:00:07.882 *** 2025-09-17 00:53:13.458120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 00:53:13.458138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 00:53:13.458157 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:53:13.458169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 00:53:13.458190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 00:53:13.458203 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:13.458215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 00:53:13.458232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 00:53:13.458251 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:13.458262 | orchestrator | 2025-09-17 00:53:13.458273 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-17 00:53:13.458284 | orchestrator | Wednesday 17 September 2025 00:50:40 +0000 (0:00:00.656) 0:00:08.539 *** 2025-09-17 00:53:13.458295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 00:53:13.458315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 00:53:13.458327 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:53:13.458339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 00:53:13.458356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 00:53:13.458374 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:13.458386 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 00:53:13.458406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 00:53:13.458418 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:13.458430 | orchestrator | 
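The per-item output above is driven by a service-definition mapping: each role iterates a dict of service name to settings (container_name, image, healthcheck, haproxy, ...) and acts only on enabled entries, which is why every task repeats the same item structure per node. A minimal Python sketch of that iteration pattern, not kolla-ansible's actual code; the dict values are trimmed from the testbed-node-0 items in this log, and the `enabled_containers` helper is a hypothetical illustration:

```python
# Trimmed service definitions, copied from the testbed-node-0 items in this log.
services = {
    "opensearch": {
        "container_name": "opensearch",
        "group": "opensearch",
        "enabled": True,
        "image": "registry.osism.tech/kolla/opensearch:2024.2",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
            "timeout": "30",
        },
    },
    "opensearch-dashboards": {
        "container_name": "opensearch_dashboards",
        "group": "opensearch-dashboards",
        "enabled": True,
        "image": "registry.osism.tech/kolla/opensearch-dashboards:2024.2",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5601"],
            "timeout": "30",
        },
    },
}

def enabled_containers(svc_map):
    """Mimic the with_dict loop: collect container names of enabled services."""
    return [v["container_name"] for v in svc_map.values() if v.get("enabled")]

print(enabled_containers(services))
```

Under this sketch, each "changed:/skipping:" line in the log corresponds to one `(key, value)` pair from such a mapping, evaluated per node with node-specific healthcheck addresses (192.168.16.10/.11/.12).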
2025-09-17 00:53:13.458441 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-17 00:53:13.458452 | orchestrator | Wednesday 17 September 2025 00:50:40 +0000 (0:00:00.909) 0:00:09.449 *** 2025-09-17 00:53:13.458463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:53:13.458482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:53:13.458506 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:53:13.458525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.458538 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.458551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.458570 | orchestrator | 2025-09-17 00:53:13.458582 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-17 00:53:13.458593 | orchestrator | Wednesday 17 September 2025 00:50:43 +0000 (0:00:02.368) 0:00:11.818 *** 2025-09-17 00:53:13.458604 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:13.458614 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:53:13.458630 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:53:13.458642 | orchestrator | 2025-09-17 00:53:13.458653 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-17 00:53:13.458663 | orchestrator | Wednesday 17 September 2025 00:50:45 +0000 (0:00:02.475) 0:00:14.294 *** 2025-09-17 00:53:13.458674 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:13.458685 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:53:13.458696 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:53:13.458706 | orchestrator | 2025-09-17 00:53:13.458717 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-17 00:53:13.458728 | orchestrator | Wednesday 17 September 2025 00:50:47 +0000 (0:00:01.671) 0:00:15.965 *** 2025-09-17 00:53:13.458739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:53:13.458758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 00:53:13.458770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 
00:53:13.458788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.458806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.458826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 00:53:13.458838 | orchestrator | 2025-09-17 00:53:13.458849 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-17 00:53:13.458860 | orchestrator | Wednesday 17 September 2025 00:50:49 +0000 (0:00:02.011) 0:00:17.977 *** 2025-09-17 00:53:13.458871 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:53:13.458888 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:13.458899 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:13.458928 | orchestrator | 2025-09-17 00:53:13.458940 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-17 00:53:13.458951 | orchestrator | Wednesday 17 September 
2025 00:50:49 +0000 (0:00:00.260) 0:00:18.237 *** 2025-09-17 00:53:13.458962 | orchestrator | 2025-09-17 00:53:13.458973 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-17 00:53:13.458984 | orchestrator | Wednesday 17 September 2025 00:50:49 +0000 (0:00:00.058) 0:00:18.296 *** 2025-09-17 00:53:13.458995 | orchestrator | 2025-09-17 00:53:13.459006 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-17 00:53:13.459016 | orchestrator | Wednesday 17 September 2025 00:50:49 +0000 (0:00:00.058) 0:00:18.355 *** 2025-09-17 00:53:13.459027 | orchestrator | 2025-09-17 00:53:13.459038 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-17 00:53:13.459049 | orchestrator | Wednesday 17 September 2025 00:50:49 +0000 (0:00:00.058) 0:00:18.414 *** 2025-09-17 00:53:13.459060 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:53:13.459070 | orchestrator | 2025-09-17 00:53:13.459081 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-17 00:53:13.459092 | orchestrator | Wednesday 17 September 2025 00:50:50 +0000 (0:00:00.184) 0:00:18.598 *** 2025-09-17 00:53:13.459103 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:53:13.459113 | orchestrator | 2025-09-17 00:53:13.459124 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-17 00:53:13.459135 | orchestrator | Wednesday 17 September 2025 00:50:50 +0000 (0:00:00.463) 0:00:19.061 *** 2025-09-17 00:53:13.459146 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:13.459157 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:53:13.459167 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:53:13.459178 | orchestrator | 2025-09-17 00:53:13.459189 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] 
********* 2025-09-17 00:53:13.459200 | orchestrator | Wednesday 17 September 2025 00:51:49 +0000 (0:00:58.998) 0:01:18.060 *** 2025-09-17 00:53:13.459210 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:13.459221 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:53:13.459232 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:53:13.459242 | orchestrator | 2025-09-17 00:53:13.459253 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-17 00:53:13.459264 | orchestrator | Wednesday 17 September 2025 00:53:01 +0000 (0:01:11.518) 0:02:29.578 *** 2025-09-17 00:53:13.459275 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:53:13.459285 | orchestrator | 2025-09-17 00:53:13.459301 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-17 00:53:13.459312 | orchestrator | Wednesday 17 September 2025 00:53:01 +0000 (0:00:00.495) 0:02:30.074 *** 2025-09-17 00:53:13.459323 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:53:13.459334 | orchestrator | 2025-09-17 00:53:13.459345 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-17 00:53:13.459356 | orchestrator | Wednesday 17 September 2025 00:53:04 +0000 (0:00:02.889) 0:02:32.963 *** 2025-09-17 00:53:13.459366 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:53:13.459377 | orchestrator | 2025-09-17 00:53:13.459388 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-17 00:53:13.459398 | orchestrator | Wednesday 17 September 2025 00:53:06 +0000 (0:00:02.262) 0:02:35.226 *** 2025-09-17 00:53:13.459409 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:13.459420 | orchestrator | 2025-09-17 00:53:13.459431 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] 
***************** 2025-09-17 00:53:13.459442 | orchestrator | Wednesday 17 September 2025 00:53:09 +0000 (0:00:02.724) 0:02:37.951 *** 2025-09-17 00:53:13.459453 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:13.459470 | orchestrator | 2025-09-17 00:53:13.459481 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:53:13.459492 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 00:53:13.459504 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 00:53:13.459515 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 00:53:13.459526 | orchestrator | 2025-09-17 00:53:13.459537 | orchestrator | 2025-09-17 00:53:13.459548 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:53:13.459564 | orchestrator | Wednesday 17 September 2025 00:53:12 +0000 (0:00:02.723) 0:02:40.675 *** 2025-09-17 00:53:13.459576 | orchestrator | =============================================================================== 2025-09-17 00:53:13.459586 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 71.52s 2025-09-17 00:53:13.459597 | orchestrator | opensearch : Restart opensearch container ------------------------------ 59.00s 2025-09-17 00:53:13.459608 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.38s 2025-09-17 00:53:13.459619 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.89s 2025-09-17 00:53:13.459629 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.72s 2025-09-17 00:53:13.459640 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.72s 2025-09-17 00:53:13.459651 
| orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.48s 2025-09-17 00:53:13.459662 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.37s 2025-09-17 00:53:13.459672 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.26s 2025-09-17 00:53:13.459683 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.01s 2025-09-17 00:53:13.459694 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.74s 2025-09-17 00:53:13.459704 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.67s 2025-09-17 00:53:13.459715 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.91s 2025-09-17 00:53:13.459726 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.74s 2025-09-17 00:53:13.459737 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.66s 2025-09-17 00:53:13.459747 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-09-17 00:53:13.459758 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2025-09-17 00:53:13.459769 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2025-09-17 00:53:13.459779 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.46s 2025-09-17 00:53:13.459790 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-09-17 00:53:13.459801 | orchestrator | 2025-09-17 00:53:13 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:53:16.503152 | orchestrator | 2025-09-17 00:53:16 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:53:16.504750 | 
orchestrator | 2025-09-17 00:53:16 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:53:16.504781 | orchestrator | 2025-09-17 00:53:16 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:53:19.556448 | orchestrator | 2025-09-17 00:53:19 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:53:19.558210 | orchestrator | 2025-09-17 00:53:19 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:53:19.558278 | orchestrator | 2025-09-17 00:53:19 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:53:22.599866 | orchestrator | 2025-09-17 00:53:22 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:53:22.602100 | orchestrator | 2025-09-17 00:53:22 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:53:22.602143 | orchestrator | 2025-09-17 00:53:22 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:53:25.643758 | orchestrator | 2025-09-17 00:53:25 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:53:25.645003 | orchestrator | 2025-09-17 00:53:25 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:53:25.645125 | orchestrator | 2025-09-17 00:53:25 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:53:28.689255 | orchestrator | 2025-09-17 00:53:28 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:53:28.690381 | orchestrator | 2025-09-17 00:53:28 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:53:28.690507 | orchestrator | 2025-09-17 00:53:28 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:53:31.735305 | orchestrator | 2025-09-17 00:53:31 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:53:31.737281 | orchestrator | 2025-09-17 00:53:31 | INFO  | Task 
b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:53:31.737312 | orchestrator | 2025-09-17 00:53:31 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:53:34.776264 | orchestrator | 2025-09-17 00:53:34 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state STARTED 2025-09-17 00:53:34.777599 | orchestrator | 2025-09-17 00:53:34 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:53:34.777630 | orchestrator | 2025-09-17 00:53:34 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:53:37.832549 | orchestrator | 2025-09-17 00:53:37 | INFO  | Task f9af0a1a-0a60-481d-88ea-42ff74eaf73b is in state SUCCESS 2025-09-17 00:53:37.834738 | orchestrator | 2025-09-17 00:53:37.834777 | orchestrator | 2025-09-17 00:53:37.834789 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-17 00:53:37.834800 | orchestrator | 2025-09-17 00:53:37.834810 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-17 00:53:37.834820 | orchestrator | Wednesday 17 September 2025 00:50:31 +0000 (0:00:00.098) 0:00:00.098 *** 2025-09-17 00:53:37.834830 | orchestrator | ok: [localhost] => { 2025-09-17 00:53:37.834842 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-17 00:53:37.834852 | orchestrator | } 2025-09-17 00:53:37.834862 | orchestrator | 2025-09-17 00:53:37.834872 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-17 00:53:37.834882 | orchestrator | Wednesday 17 September 2025 00:50:31 +0000 (0:00:00.058) 0:00:00.156 *** 2025-09-17 00:53:37.834891 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-17 00:53:37.834903 | orchestrator | ...ignoring 2025-09-17 00:53:37.834933 | orchestrator | 2025-09-17 00:53:37.834944 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-17 00:53:37.834954 | orchestrator | Wednesday 17 September 2025 00:50:34 +0000 (0:00:02.859) 0:00:03.015 *** 2025-09-17 00:53:37.834964 | orchestrator | skipping: [localhost] 2025-09-17 00:53:37.834973 | orchestrator | 2025-09-17 00:53:37.834983 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-17 00:53:37.834993 | orchestrator | Wednesday 17 September 2025 00:50:34 +0000 (0:00:00.061) 0:00:03.077 *** 2025-09-17 00:53:37.835210 | orchestrator | ok: [localhost] 2025-09-17 00:53:37.835225 | orchestrator | 2025-09-17 00:53:37.835235 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 00:53:37.835244 | orchestrator | 2025-09-17 00:53:37.835253 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 00:53:37.835263 | orchestrator | Wednesday 17 September 2025 00:50:34 +0000 (0:00:00.145) 0:00:03.222 *** 2025-09-17 00:53:37.835272 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:53:37.835282 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:53:37.835291 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:53:37.835300 | orchestrator | 2025-09-17 00:53:37.835311 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 00:53:37.835321 | orchestrator | Wednesday 17 September 2025 00:50:35 +0000 (0:00:00.309) 0:00:03.532 *** 2025-09-17 00:53:37.835330 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-17 00:53:37.835340 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-09-17 00:53:37.835349 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-17 00:53:37.835359 | orchestrator | 2025-09-17 00:53:37.835368 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-17 00:53:37.835377 | orchestrator | 2025-09-17 00:53:37.835387 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-17 00:53:37.835396 | orchestrator | Wednesday 17 September 2025 00:50:35 +0000 (0:00:00.538) 0:00:04.071 *** 2025-09-17 00:53:37.835406 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-17 00:53:37.835415 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-17 00:53:37.835425 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-17 00:53:37.835434 | orchestrator | 2025-09-17 00:53:37.835444 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-17 00:53:37.835453 | orchestrator | Wednesday 17 September 2025 00:50:35 +0000 (0:00:00.350) 0:00:04.421 *** 2025-09-17 00:53:37.835476 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:53:37.835487 | orchestrator | 2025-09-17 00:53:37.835496 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-17 00:53:37.835506 | orchestrator | Wednesday 17 September 2025 00:50:36 +0000 (0:00:00.683) 0:00:05.104 *** 2025-09-17 00:53:37.835532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 00:53:37.835559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 00:53:37.835577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 00:53:37.835588 | orchestrator | 2025-09-17 00:53:37.835605 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-17 00:53:37.835622 | orchestrator | Wednesday 17 September 2025 00:50:39 +0000 (0:00:03.273) 0:00:08.378 *** 2025-09-17 00:53:37.835632 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:37.835642 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:37.835651 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:37.835661 | orchestrator | 2025-09-17 00:53:37.835670 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-17 00:53:37.835680 | orchestrator | Wednesday 17 September 2025 00:50:40 +0000 (0:00:00.588) 0:00:08.966 *** 2025-09-17 00:53:37.835690 | orchestrator | skipping: [testbed-node-1] 2025-09-17 
00:53:37.835699 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:37.835709 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:37.835718 | orchestrator | 2025-09-17 00:53:37.835728 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-17 00:53:37.835737 | orchestrator | Wednesday 17 September 2025 00:50:41 +0000 (0:00:01.329) 0:00:10.296 *** 2025-09-17 00:53:37.835748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 00:53:37.835770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 00:53:37.835788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 
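Rendered into the proxy configuration, the `haproxy.mariadb` values logged above (mode `tcp`, the `frontend_tcp_extra`/`backend_tcp_extra` options, and the `custom_member_list` server lines with one active node and two `backup` nodes) would produce roughly the following section. This is a sketch assembled from the logged values, not the actual generated file; the bind address is a placeholder, since the VIP does not appear in this log excerpt:

```
listen mariadb
    mode tcp
    bind <internal_vip>:3306      # placeholder: VIP not shown in this log
    option clitcpka               # frontend_tcp_extra
    timeout client 3600s
    option srvtcpka               # backend_tcp_extra
    timeout server 3600s
    # custom_member_list: only testbed-node-0 serves traffic; the others
    # are hot standbys, avoiding multi-writer conflicts in Galera
    server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
    server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup
```

The `check port 3306 inter 2000 rise 2 fall 5` parameters health-check each member every 2000 ms, marking it up after 2 consecutive successes and down after 5 failures.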
00:53:37.835799 | orchestrator |
2025-09-17 00:53:37.835811 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-09-17 00:53:37.835823 | orchestrator | Wednesday 17 September 2025 00:50:45 +0000 (0:00:03.391) 0:00:13.688 ***
2025-09-17 00:53:37.835834 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:53:37.835845 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:53:37.835856 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:53:37.835867 | orchestrator |
2025-09-17 00:53:37.835878 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-09-17 00:53:37.835889 | orchestrator | Wednesday 17 September 2025 00:50:46 +0000 (0:00:01.048) 0:00:14.736 ***
2025-09-17 00:53:37.835935 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:53:37.835947 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:53:37.835958 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:53:37.835969 | orchestrator |
2025-09-17 00:53:37.835980 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-17 00:53:37.836074 | orchestrator | Wednesday 17 September 2025 00:50:50 +0000 (0:00:03.819) 0:00:18.555 ***
2025-09-17 00:53:37.836089 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:53:37.836100 | orchestrator |
2025-09-17 00:53:37.836111 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-09-17 00:53:37.836122 | orchestrator | Wednesday 17 September 2025 00:50:50 +0000 (0:00:00.448) 0:00:19.004 ***
2025-09-17 00:53:37.836145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes':
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:53:37.836166 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:53:37.836182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:53:37.836194 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:37.836210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:53:37.836228 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:37.836238 | orchestrator | 2025-09-17 00:53:37.836247 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-17 00:53:37.836257 | orchestrator | Wednesday 17 September 2025 00:50:53 +0000 (0:00:03.377) 0:00:22.382 *** 2025-09-17 00:53:37.836267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:53:37.836278 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:37.836297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:53:37.836316 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:37.836327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:53:37.836338 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:53:37.836347 | orchestrator | 2025-09-17 00:53:37.836357 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-17 00:53:37.836366 | orchestrator | Wednesday 17 September 2025 00:50:56 +0000 (0:00:02.229) 0:00:24.612 *** 2025-09-17 00:53:37.836381 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:53:37.836404 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:53:37.836423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:53:37.836435 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:37.836450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 00:53:37.836467 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:37.836478 | orchestrator | 2025-09-17 00:53:37.836487 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-17 00:53:37.836497 | orchestrator | Wednesday 17 September 2025 00:50:58 +0000 
(0:00:02.435) 0:00:27.047 *** 2025-09-17 00:53:37.836515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 00:53:37.836532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 00:53:37.836559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 00:53:37.836572 | orchestrator | 2025-09-17 00:53:37.836582 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-17 00:53:37.836591 | orchestrator | Wednesday 17 September 2025 00:51:01 +0000 (0:00:02.923) 0:00:29.971 *** 2025-09-17 00:53:37.836600 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:37.836610 | orchestrator | 
changed: [testbed-node-1]
2025-09-17 00:53:37.836619 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:53:37.836629 | orchestrator |
2025-09-17 00:53:37.836638 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-09-17 00:53:37.836648 | orchestrator | Wednesday 17 September 2025 00:51:02 +0000 (0:00:00.834) 0:00:30.805 ***
2025-09-17 00:53:37.836657 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:53:37.836667 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:53:37.836676 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:53:37.836686 | orchestrator |
2025-09-17 00:53:37.836695 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-09-17 00:53:37.836705 | orchestrator | Wednesday 17 September 2025 00:51:02 +0000 (0:00:00.462) 0:00:31.268 ***
2025-09-17 00:53:37.836714 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:53:37.836730 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:53:37.836739 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:53:37.836749 | orchestrator |
2025-09-17 00:53:37.836758 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-09-17 00:53:37.836768 | orchestrator | Wednesday 17 September 2025 00:51:03 +0000 (0:00:00.327) 0:00:31.596 ***
2025-09-17 00:53:37.836778 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-09-17 00:53:37.836788 | orchestrator | ...ignoring
2025-09-17 00:53:37.836798 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-09-17 00:53:37.836807 | orchestrator | ...ignoring
2025-09-17 00:53:37.836821 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-09-17 00:53:37.836831 | orchestrator | ...ignoring
2025-09-17 00:53:37.836841 | orchestrator |
2025-09-17 00:53:37.836850 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-09-17 00:53:37.836860 | orchestrator | Wednesday 17 September 2025 00:51:14 +0000 (0:00:11.061) 0:00:42.658 ***
2025-09-17 00:53:37.836869 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:53:37.836879 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:53:37.836888 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:53:37.836897 | orchestrator |
2025-09-17 00:53:37.836936 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-09-17 00:53:37.836946 | orchestrator | Wednesday 17 September 2025 00:51:14 +0000 (0:00:00.400) 0:00:43.058 ***
2025-09-17 00:53:37.836956 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:53:37.836965 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:53:37.836975 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:53:37.836984 | orchestrator |
2025-09-17 00:53:37.836994 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-09-17 00:53:37.837003 | orchestrator | Wednesday 17 September 2025 00:51:15 +0000 (0:00:00.599) 0:00:43.658 ***
2025-09-17 00:53:37.837013 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:53:37.837022 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:53:37.837031 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:53:37.837041 | orchestrator |
2025-09-17 00:53:37.837050 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-09-17 00:53:37.837060 | orchestrator | Wednesday 17 September 2025 00:51:15 +0000 (0:00:00.427) 0:00:44.085 ***
2025-09-17 00:53:37.837069 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:53:37.837078 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:53:37.837088 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:53:37.837097 | orchestrator |
2025-09-17 00:53:37.837107 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-09-17 00:53:37.837116 | orchestrator | Wednesday 17 September 2025 00:51:16 +0000 (0:00:00.394) 0:00:44.480 ***
2025-09-17 00:53:37.837125 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:53:37.837135 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:53:37.837144 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:53:37.837154 | orchestrator |
2025-09-17 00:53:37.837163 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-09-17 00:53:37.837173 | orchestrator | Wednesday 17 September 2025 00:51:16 +0000 (0:00:00.420) 0:00:44.901 ***
2025-09-17 00:53:37.837187 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:53:37.837197 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:53:37.837207 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:53:37.837216 | orchestrator |
2025-09-17 00:53:37.837226 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-17 00:53:37.837235 | orchestrator | Wednesday 17 September 2025 00:51:16 +0000 (0:00:00.505) 0:00:45.406 ***
2025-09-17 00:53:37.837245 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:53:37.837261 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:53:37.837270 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-09-17 00:53:37.837280 | orchestrator |
2025-09-17 00:53:37.837289 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-09-17 00:53:37.837299 | orchestrator | Wednesday 17 September 2025 00:51:17 +0000 (0:00:00.343) 0:00:45.750 ***
2025-09-17 00:53:37.837308 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:53:37.837318 | orchestrator |
2025-09-17 00:53:37.837327 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-09-17 00:53:37.837337 | orchestrator | Wednesday 17 September 2025 00:51:27 +0000 (0:00:09.775) 0:00:55.525 ***
2025-09-17 00:53:37.837346 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:53:37.837356 | orchestrator |
2025-09-17 00:53:37.837365 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-17 00:53:37.837375 | orchestrator | Wednesday 17 September 2025 00:51:27 +0000 (0:00:00.116) 0:00:55.642 ***
2025-09-17 00:53:37.837384 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:53:37.837394 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:53:37.837403 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:53:37.837412 | orchestrator |
2025-09-17 00:53:37.837422 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-09-17 00:53:37.837431 | orchestrator | Wednesday 17 September 2025 00:51:28 +0000 (0:00:00.923) 0:00:56.565 ***
2025-09-17 00:53:37.837441 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:53:37.837450 | orchestrator |
2025-09-17 00:53:37.837460 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-09-17 00:53:37.837469 | orchestrator | Wednesday 17 September 2025 00:51:35 +0000 (0:00:07.192) 0:01:03.758 ***
2025-09-17 00:53:37.837479 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:53:37.837488 | orchestrator |
2025-09-17 00:53:37.837498 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-09-17 00:53:37.837507 | orchestrator | Wednesday 17 September 2025 00:51:36 +0000 (0:00:01.681) 0:01:05.439 ***
2025-09-17 00:53:37.837516 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:53:37.837526 |
orchestrator | 2025-09-17 00:53:37.837535 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-17 00:53:37.837545 | orchestrator | Wednesday 17 September 2025 00:51:39 +0000 (0:00:02.497) 0:01:07.937 *** 2025-09-17 00:53:37.837554 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:37.837564 | orchestrator | 2025-09-17 00:53:37.837573 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-17 00:53:37.837583 | orchestrator | Wednesday 17 September 2025 00:51:39 +0000 (0:00:00.109) 0:01:08.047 *** 2025-09-17 00:53:37.837592 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:53:37.837602 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:37.837611 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:37.837621 | orchestrator | 2025-09-17 00:53:37.837706 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-17 00:53:37.837719 | orchestrator | Wednesday 17 September 2025 00:51:39 +0000 (0:00:00.280) 0:01:08.327 *** 2025-09-17 00:53:37.837729 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:53:37.837739 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-17 00:53:37.837754 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:53:37.837823 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:53:37.837837 | orchestrator | 2025-09-17 00:53:37.837846 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-17 00:53:37.837856 | orchestrator | skipping: no hosts matched 2025-09-17 00:53:37.837866 | orchestrator | 2025-09-17 00:53:37.837875 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-17 00:53:37.837885 | orchestrator | 2025-09-17 00:53:37.837894 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-09-17 00:53:37.837903 | orchestrator | Wednesday 17 September 2025 00:51:40 +0000 (0:00:00.446) 0:01:08.773 *** 2025-09-17 00:53:37.837964 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:53:37.837974 | orchestrator | 2025-09-17 00:53:37.837984 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-17 00:53:37.837994 | orchestrator | Wednesday 17 September 2025 00:51:56 +0000 (0:00:16.476) 0:01:25.249 *** 2025-09-17 00:53:37.838003 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:53:37.838052 | orchestrator | 2025-09-17 00:53:37.838065 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-17 00:53:37.838075 | orchestrator | Wednesday 17 September 2025 00:52:17 +0000 (0:00:20.625) 0:01:45.875 *** 2025-09-17 00:53:37.838084 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:53:37.838094 | orchestrator | 2025-09-17 00:53:37.838104 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-17 00:53:37.838113 | orchestrator | 2025-09-17 00:53:37.838123 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-17 00:53:37.838132 | orchestrator | Wednesday 17 September 2025 00:52:19 +0000 (0:00:02.424) 0:01:48.299 *** 2025-09-17 00:53:37.838142 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:53:37.838151 | orchestrator | 2025-09-17 00:53:37.838161 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-17 00:53:37.838170 | orchestrator | Wednesday 17 September 2025 00:52:39 +0000 (0:00:19.407) 0:02:07.707 *** 2025-09-17 00:53:37.838180 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:53:37.838189 | orchestrator | 2025-09-17 00:53:37.838199 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-17 00:53:37.838208 
| orchestrator | Wednesday 17 September 2025 00:52:59 +0000 (0:00:20.613) 0:02:28.320 *** 2025-09-17 00:53:37.838218 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:53:37.838228 | orchestrator | 2025-09-17 00:53:37.838237 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-17 00:53:37.838246 | orchestrator | 2025-09-17 00:53:37.838264 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-17 00:53:37.838274 | orchestrator | Wednesday 17 September 2025 00:53:02 +0000 (0:00:02.483) 0:02:30.804 *** 2025-09-17 00:53:37.838283 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:37.838293 | orchestrator | 2025-09-17 00:53:37.838302 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-17 00:53:37.838312 | orchestrator | Wednesday 17 September 2025 00:53:19 +0000 (0:00:16.831) 0:02:47.636 *** 2025-09-17 00:53:37.838321 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:53:37.838330 | orchestrator | 2025-09-17 00:53:37.838340 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-17 00:53:37.838349 | orchestrator | Wednesday 17 September 2025 00:53:19 +0000 (0:00:00.533) 0:02:48.169 *** 2025-09-17 00:53:37.838359 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:53:37.838368 | orchestrator | 2025-09-17 00:53:37.838377 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-17 00:53:37.838387 | orchestrator | 2025-09-17 00:53:37.838396 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-17 00:53:37.838406 | orchestrator | Wednesday 17 September 2025 00:53:22 +0000 (0:00:02.710) 0:02:50.879 *** 2025-09-17 00:53:37.838415 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:53:37.838425 | orchestrator | 
2025-09-17 00:53:37.838434 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-17 00:53:37.838444 | orchestrator | Wednesday 17 September 2025 00:53:22 +0000 (0:00:00.496) 0:02:51.376 *** 2025-09-17 00:53:37.838453 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:37.838463 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:37.838474 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:37.838485 | orchestrator | 2025-09-17 00:53:37.838495 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-17 00:53:37.838506 | orchestrator | Wednesday 17 September 2025 00:53:25 +0000 (0:00:02.319) 0:02:53.695 *** 2025-09-17 00:53:37.838525 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:37.838536 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:37.838547 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:37.838557 | orchestrator | 2025-09-17 00:53:37.838568 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-17 00:53:37.838578 | orchestrator | Wednesday 17 September 2025 00:53:27 +0000 (0:00:02.344) 0:02:56.039 *** 2025-09-17 00:53:37.838589 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:37.838599 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:37.838610 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:37.838620 | orchestrator | 2025-09-17 00:53:37.838631 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-17 00:53:37.838641 | orchestrator | Wednesday 17 September 2025 00:53:29 +0000 (0:00:02.272) 0:02:58.312 *** 2025-09-17 00:53:37.838652 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:37.838663 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:37.838674 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:53:37.838684 | orchestrator | 
2025-09-17 00:53:37.838695 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-17 00:53:37.838706 | orchestrator | Wednesday 17 September 2025 00:53:32 +0000 (0:00:02.348) 0:03:00.660 *** 2025-09-17 00:53:37.838717 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:53:37.838728 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:53:37.838738 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:53:37.838749 | orchestrator | 2025-09-17 00:53:37.838760 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-17 00:53:37.838770 | orchestrator | Wednesday 17 September 2025 00:53:34 +0000 (0:00:02.674) 0:03:03.335 *** 2025-09-17 00:53:37.838781 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:53:37.838797 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:53:37.838808 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:53:37.838819 | orchestrator | 2025-09-17 00:53:37.838828 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:53:37.838838 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-17 00:53:37.838848 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-17 00:53:37.838860 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-17 00:53:37.838869 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-17 00:53:37.838879 | orchestrator | 2025-09-17 00:53:37.838889 | orchestrator | 2025-09-17 00:53:37.838898 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:53:37.838951 | orchestrator | Wednesday 17 September 2025 00:53:35 +0000 (0:00:00.326) 0:03:03.661 *** 2025-09-17 00:53:37.838962 | 
orchestrator | =============================================================================== 2025-09-17 00:53:37.838972 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.24s 2025-09-17 00:53:37.838981 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.88s 2025-09-17 00:53:37.838989 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.83s 2025-09-17 00:53:37.838997 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.06s 2025-09-17 00:53:37.839005 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.78s 2025-09-17 00:53:37.839013 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.19s 2025-09-17 00:53:37.839025 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.91s 2025-09-17 00:53:37.839034 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.82s 2025-09-17 00:53:37.839048 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.39s 2025-09-17 00:53:37.839056 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.38s 2025-09-17 00:53:37.839064 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.27s 2025-09-17 00:53:37.839071 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.92s 2025-09-17 00:53:37.839079 | orchestrator | Check MariaDB service --------------------------------------------------- 2.86s 2025-09-17 00:53:37.839087 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.71s 2025-09-17 00:53:37.839095 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.67s 2025-09-17 00:53:37.839103 | orchestrator | 
mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.50s 2025-09-17 00:53:37.839111 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.44s 2025-09-17 00:53:37.839119 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.35s 2025-09-17 00:53:37.839127 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.34s 2025-09-17 00:53:37.839134 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.32s 2025-09-17 00:53:37.839142 | orchestrator | 2025-09-17 00:53:37 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:53:37.839151 | orchestrator | 2025-09-17 00:53:37 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:53:37.839158 | orchestrator | 2025-09-17 00:53:37 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:53:37.839166 | orchestrator | 2025-09-17 00:53:37 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:53:40.878414 | orchestrator | 2025-09-17 00:53:40 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:53:40.880630 | orchestrator | 2025-09-17 00:53:40 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:53:40.882601 | orchestrator | 2025-09-17 00:53:40 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:53:40.882629 | orchestrator | 2025-09-17 00:53:40 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:53:43.921956 | orchestrator | 2025-09-17 00:53:43 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:53:43.923063 | orchestrator | 2025-09-17 00:53:43 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:53:43.924951 | orchestrator | 2025-09-17 00:53:43 | INFO  | Task 
4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:53:43.925316 | orchestrator | 2025-09-17 00:53:43 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:54:53.971109 | orchestrator | 2025-09-17 00:54:53 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state STARTED 2025-09-17 00:54:53.971663 | orchestrator | 2025-09-17 00:54:53 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:54:53.973426 | orchestrator | 2025-09-17 00:54:53 |
INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:54:53.973780 | orchestrator | 2025-09-17 00:54:53 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:54:57.029865 | orchestrator | 2025-09-17 00:54:57 | INFO  | Task b6d4827c-52a1-4eff-9698-9b32a11f3c48 is in state SUCCESS 2025-09-17 00:54:57.031567 | orchestrator | 2025-09-17 00:54:57.031619 | orchestrator | 2025-09-17 00:54:57.031633 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-17 00:54:57.031645 | orchestrator | 2025-09-17 00:54:57.031656 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-17 00:54:57.031668 | orchestrator | Wednesday 17 September 2025 00:52:46 +0000 (0:00:00.590) 0:00:00.590 *** 2025-09-17 00:54:57.031679 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 00:54:57.031692 | orchestrator | 2025-09-17 00:54:57.031703 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-17 00:54:57.031713 | orchestrator | Wednesday 17 September 2025 00:52:47 +0000 (0:00:00.617) 0:00:01.208 *** 2025-09-17 00:54:57.031724 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:54:57.031736 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:54:57.031747 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:54:57.031758 | orchestrator | 2025-09-17 00:54:57.031769 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-17 00:54:57.031780 | orchestrator | Wednesday 17 September 2025 00:52:47 +0000 (0:00:00.626) 0:00:01.834 *** 2025-09-17 00:54:57.031790 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:54:57.031801 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:54:57.031812 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:54:57.031823 | orchestrator | 2025-09-17 00:54:57.031834 | 
orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-17 00:54:57.031844 | orchestrator | Wednesday 17 September 2025 00:52:48 +0000 (0:00:00.283) 0:00:02.118 *** 2025-09-17 00:54:57.031855 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:54:57.031866 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:54:57.031876 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:54:57.031887 | orchestrator | 2025-09-17 00:54:57.031898 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-17 00:54:57.031910 | orchestrator | Wednesday 17 September 2025 00:52:48 +0000 (0:00:00.769) 0:00:02.888 *** 2025-09-17 00:54:57.032062 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:54:57.032075 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:54:57.032086 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:54:57.032096 | orchestrator | 2025-09-17 00:54:57.032107 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-17 00:54:57.032118 | orchestrator | Wednesday 17 September 2025 00:52:49 +0000 (0:00:00.289) 0:00:03.178 *** 2025-09-17 00:54:57.032129 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:54:57.032140 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:54:57.032150 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:54:57.032161 | orchestrator | 2025-09-17 00:54:57.032188 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-17 00:54:57.032199 | orchestrator | Wednesday 17 September 2025 00:52:49 +0000 (0:00:00.295) 0:00:03.473 *** 2025-09-17 00:54:57.032210 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:54:57.032221 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:54:57.032232 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:54:57.032242 | orchestrator | 2025-09-17 00:54:57.032253 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if 
not previously set] *** 2025-09-17 00:54:57.032264 | orchestrator | Wednesday 17 September 2025 00:52:49 +0000 (0:00:00.297) 0:00:03.771 *** 2025-09-17 00:54:57.032276 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.032287 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.032298 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.032309 | orchestrator | 2025-09-17 00:54:57.032319 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-17 00:54:57.032330 | orchestrator | Wednesday 17 September 2025 00:52:50 +0000 (0:00:00.461) 0:00:04.232 *** 2025-09-17 00:54:57.032341 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:54:57.032352 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:54:57.032363 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:54:57.032373 | orchestrator | 2025-09-17 00:54:57.032399 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-17 00:54:57.032411 | orchestrator | Wednesday 17 September 2025 00:52:50 +0000 (0:00:00.287) 0:00:04.519 *** 2025-09-17 00:54:57.032422 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-17 00:54:57.032432 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-17 00:54:57.032443 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-17 00:54:57.032454 | orchestrator | 2025-09-17 00:54:57.032465 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-17 00:54:57.032475 | orchestrator | Wednesday 17 September 2025 00:52:51 +0000 (0:00:00.601) 0:00:05.120 *** 2025-09-17 00:54:57.032486 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:54:57.032497 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:54:57.032507 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:54:57.032518 | 
orchestrator | 2025-09-17 00:54:57.032529 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-17 00:54:57.032540 | orchestrator | Wednesday 17 September 2025 00:52:51 +0000 (0:00:00.392) 0:00:05.513 *** 2025-09-17 00:54:57.032551 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-17 00:54:57.032561 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-17 00:54:57.032572 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-17 00:54:57.032583 | orchestrator | 2025-09-17 00:54:57.032593 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-17 00:54:57.032604 | orchestrator | Wednesday 17 September 2025 00:52:53 +0000 (0:00:02.219) 0:00:07.732 *** 2025-09-17 00:54:57.032615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-17 00:54:57.032626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-17 00:54:57.032637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-17 00:54:57.032648 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.032659 | orchestrator | 2025-09-17 00:54:57.032670 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-17 00:54:57.032724 | orchestrator | Wednesday 17 September 2025 00:52:54 +0000 (0:00:00.382) 0:00:08.114 *** 2025-09-17 00:54:57.032743 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.032760 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.032773 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.032786 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.032799 | orchestrator | 2025-09-17 00:54:57.032812 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-17 00:54:57.032825 | orchestrator | Wednesday 17 September 2025 00:52:54 +0000 (0:00:00.748) 0:00:08.863 *** 2025-09-17 00:54:57.032839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.032861 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.032882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.032896 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.032908 | orchestrator | 2025-09-17 00:54:57.032976 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-17 00:54:57.032990 | orchestrator | Wednesday 17 September 2025 00:52:55 +0000 (0:00:00.168) 0:00:09.032 *** 2025-09-17 00:54:57.033005 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e5c50ba141db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-17 00:52:52.178371', 'end': '2025-09-17 00:52:52.234053', 'delta': '0:00:00.055682', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e5c50ba141db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-17 00:54:57.033022 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '926a4774e3d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-17 00:52:52.940178', 'end': '2025-09-17 00:52:52.983835', 'delta': '0:00:00.043657', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['926a4774e3d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': 
False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-17 00:54:57.033075 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '645a9428a529', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-17 00:52:53.533857', 'end': '2025-09-17 00:52:53.580761', 'delta': '0:00:00.046904', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['645a9428a529'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-17 00:54:57.033089 | orchestrator | 2025-09-17 00:54:57.033100 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-17 00:54:57.033111 | orchestrator | Wednesday 17 September 2025 00:52:55 +0000 (0:00:00.364) 0:00:09.396 *** 2025-09-17 00:54:57.033122 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:54:57.033132 | orchestrator | ok: [testbed-node-4] 2025-09-17 00:54:57.033143 | orchestrator | ok: [testbed-node-5] 2025-09-17 00:54:57.033153 | orchestrator | 2025-09-17 00:54:57.033164 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-17 00:54:57.033183 | orchestrator | Wednesday 17 September 2025 00:52:55 +0000 (0:00:00.427) 0:00:09.823 *** 2025-09-17 00:54:57.033194 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-17 00:54:57.033205 | orchestrator | 2025-09-17 00:54:57.033216 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-17 00:54:57.033226 | orchestrator | Wednesday 17 September 2025 00:52:57 +0000 (0:00:01.741) 0:00:11.565 
*** 2025-09-17 00:54:57.033237 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.033248 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.033258 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.033269 | orchestrator | 2025-09-17 00:54:57.033280 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-17 00:54:57.033290 | orchestrator | Wednesday 17 September 2025 00:52:57 +0000 (0:00:00.282) 0:00:11.847 *** 2025-09-17 00:54:57.033301 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.033311 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.033322 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.033332 | orchestrator | 2025-09-17 00:54:57.033349 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-17 00:54:57.033360 | orchestrator | Wednesday 17 September 2025 00:52:58 +0000 (0:00:00.394) 0:00:12.241 *** 2025-09-17 00:54:57.033371 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.033381 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.033392 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.033403 | orchestrator | 2025-09-17 00:54:57.033413 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-17 00:54:57.033424 | orchestrator | Wednesday 17 September 2025 00:52:58 +0000 (0:00:00.459) 0:00:12.701 *** 2025-09-17 00:54:57.033435 | orchestrator | ok: [testbed-node-3] 2025-09-17 00:54:57.033445 | orchestrator | 2025-09-17 00:54:57.033456 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-17 00:54:57.033467 | orchestrator | Wednesday 17 September 2025 00:52:58 +0000 (0:00:00.136) 0:00:12.837 *** 2025-09-17 00:54:57.033477 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.033488 | orchestrator | 2025-09-17 00:54:57.033498 | 
orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-17 00:54:57.033509 | orchestrator | Wednesday 17 September 2025 00:52:59 +0000 (0:00:00.228) 0:00:13.065 *** 2025-09-17 00:54:57.033520 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.033530 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.033541 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.033552 | orchestrator | 2025-09-17 00:54:57.033562 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-17 00:54:57.033573 | orchestrator | Wednesday 17 September 2025 00:52:59 +0000 (0:00:00.268) 0:00:13.334 *** 2025-09-17 00:54:57.033584 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.033594 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.033605 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.033615 | orchestrator | 2025-09-17 00:54:57.033626 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-17 00:54:57.033637 | orchestrator | Wednesday 17 September 2025 00:52:59 +0000 (0:00:00.312) 0:00:13.646 *** 2025-09-17 00:54:57.033647 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.033658 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.033668 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.033679 | orchestrator | 2025-09-17 00:54:57.033689 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-17 00:54:57.033701 | orchestrator | Wednesday 17 September 2025 00:53:00 +0000 (0:00:00.465) 0:00:14.111 *** 2025-09-17 00:54:57.033711 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.033722 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.033732 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.033743 | orchestrator | 2025-09-17 00:54:57.033754 | 
orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-17 00:54:57.033771 | orchestrator | Wednesday 17 September 2025 00:53:00 +0000 (0:00:00.346) 0:00:14.458 *** 2025-09-17 00:54:57.033781 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.033792 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.033803 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.033813 | orchestrator | 2025-09-17 00:54:57.033824 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-17 00:54:57.033835 | orchestrator | Wednesday 17 September 2025 00:53:00 +0000 (0:00:00.312) 0:00:14.771 *** 2025-09-17 00:54:57.033845 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.033856 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.033867 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.033877 | orchestrator | 2025-09-17 00:54:57.033888 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-17 00:54:57.033950 | orchestrator | Wednesday 17 September 2025 00:53:01 +0000 (0:00:00.384) 0:00:15.155 *** 2025-09-17 00:54:57.033964 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.033975 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.033985 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.034154 | orchestrator | 2025-09-17 00:54:57.034166 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-17 00:54:57.034178 | orchestrator | Wednesday 17 September 2025 00:53:01 +0000 (0:00:00.572) 0:00:15.728 *** 2025-09-17 00:54:57.034190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac-osd--block--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac', 
'dm-uuid-LVM-McC2YUMR0tAmxxPtPELmePGU9mXFtjqgGMi3eXu9ExMPGx9GB5MWg6FIUhVyKJBC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15-osd--block--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15', 'dm-uuid-LVM-5tViTeBQ8Oc8FV55WuseHuulgx8yDHyMvxg1WaUVs60eWQXhd242ptbzYumv4J0O'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034245 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part1', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part14', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part15', 
'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part16', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac-osd--block--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WSmpyP-VGDL-Cazr-wGD7-fLQw-LGiy-vjHBIz', 'scsi-0QEMU_QEMU_HARDDISK_03b82624-b2d4-4492-aa08-93320337b68f', 'scsi-SQEMU_QEMU_HARDDISK_03b82624-b2d4-4492-aa08-93320337b68f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15-osd--block--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UQTxLh-k3Ne-eFGo-NHuZ-hAu5-qrVj-eddquS', 'scsi-0QEMU_QEMU_HARDDISK_6f825aad-5321-4538-8ab0-212b689e74fb', 'scsi-SQEMU_QEMU_HARDDISK_6f825aad-5321-4538-8ab0-212b689e74fb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a', 'scsi-SQEMU_QEMU_HARDDISK_23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d-osd--block--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d', 'dm-uuid-LVM-r0ALozGhR2L6c4c7HnSkc1ujfUDnirHj3dZxSBnBOJf5ffjzIaJx0xH3iSe13R5t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d1158166--3610--5fc1--bd8e--5288705939fa-osd--block--d1158166--3610--5fc1--bd8e--5288705939fa', 'dm-uuid-LVM-dR6rM0Kg4Yk1klH1e3rZpEqV5UEKMyR8IP6ZgT2lRWoV3IzM5QJnxvF0tIC8kqPz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034568 | orchestrator | skipping: [testbed-node-3] 2025-09-17 00:54:57.034579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034623 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2dc6576b--ad92--58b3--afc8--22b8ce20905e-osd--block--2dc6576b--ad92--58b3--afc8--22b8ce20905e', 'dm-uuid-LVM-pg03lch3KeYFVodEW4yidR22kwuRJWf4FMzgfXPuysxufP7dxlXYlkXK1PxX2k6x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part1', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part14', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part15', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part16', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034708 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d-osd--block--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QzUXag-mzmG-28Du-zQhp-kWL6-8Jlr-5JkD4t', 'scsi-0QEMU_QEMU_HARDDISK_47b64ee5-5944-488f-91ba-80947343c2c4', 'scsi-SQEMU_QEMU_HARDDISK_47b64ee5-5944-488f-91ba-80947343c2c4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7b5a8de--6218--5c80--971a--bac3422a4161-osd--block--a7b5a8de--6218--5c80--971a--bac3422a4161', 'dm-uuid-LVM-rAcXXceLpCWNdtel1qhxoK03BK36ONz0uhiweTSt4wUIKPoTcUAf36ISGrRTjdlw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d1158166--3610--5fc1--bd8e--5288705939fa-osd--block--d1158166--3610--5fc1--bd8e--5288705939fa'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PyuIpC-udvS-pGEe-yyK7-PyS9-dhMf-1dMXyQ', 'scsi-0QEMU_QEMU_HARDDISK_69134018-d148-466a-9d44-263112a1226d', 'scsi-SQEMU_QEMU_HARDDISK_69134018-d148-466a-9d44-263112a1226d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b516b0-60cf-4ba1-b912-e488bac04690', 'scsi-SQEMU_QEMU_HARDDISK_34b516b0-60cf-4ba1-b912-e488bac04690'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
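The device facts dumped in the records above follow a recognizable pattern: `loop*` devices report `0.00 Bytes`, `dm-*` entries are ceph LVM block devices, `sda` is the partitioned root disk, `sdb`/`sdc` already hold ceph OSD LVs, `sr0` is the removable config-drive, and only `sdd` is an empty, unclaimed disk. A hedged sketch of how such a facts dict could be sifted for OSD-candidate disks — the field names match `ansible_facts['devices']` as seen in the log, but the selection rules here are an illustration, not ceph-ansible's exact logic:

```python
# Classify block-device facts shaped like the ansible_facts['devices']
# entries in the log above. Illustrative rules, not ceph-ansible's code.

def is_osd_candidate(name: str, dev: dict) -> bool:
    """Return True for an empty, non-removable data disk (e.g. 'sdd')."""
    if name.startswith(("loop", "dm-", "sr", "ram")):
        return False                    # virtual, device-mapper, or optical
    if dev.get("removable") == "1":
        return False                    # e.g. the config-2 DVD (sr0)
    if dev.get("partitions"):
        return False                    # the root disk (sda) is partitioned
    if dev.get("holders"):
        return False                    # sdb/sdc already back ceph LVs
    if dev.get("size", "0.00 Bytes").endswith("Bytes"):
        return False                    # unbacked loop devices report 0 bytes
    return True

# Minimal stand-ins for the facts seen on testbed-node-4 in the log:
devices = {
    "loop0": {"removable": "0", "partitions": {}, "holders": [], "size": "0.00 Bytes"},
    "sda":   {"removable": "0", "partitions": {"sda1": {}}, "holders": [], "size": "80.00 GB"},
    "sdb":   {"removable": "0", "partitions": {}, "holders": ["ceph-..."], "size": "20.00 GB"},
    "sdd":   {"removable": "0", "partitions": {}, "holders": [], "size": "20.00 GB"},
    "sr0":   {"removable": "1", "partitions": {}, "holders": [], "size": "506.00 KB"},
}
candidates = [n for n, d in devices.items() if is_osd_candidate(n, d)]
# only 'sdd' survives the filters
```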
 2025-09-17 00:54:57.034827 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.034844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 00:54:57.034941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part1', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part14', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part15', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part16', 
'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2dc6576b--ad92--58b3--afc8--22b8ce20905e-osd--block--2dc6576b--ad92--58b3--afc8--22b8ce20905e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-w8FV3c-Boa2-FlG3-ELoA-Z810-NCtv-GGfCh5', 'scsi-0QEMU_QEMU_HARDDISK_833e18f8-a2f7-4c8c-b617-8f83ac55bde9', 'scsi-SQEMU_QEMU_HARDDISK_833e18f8-a2f7-4c8c-b617-8f83ac55bde9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a7b5a8de--6218--5c80--971a--bac3422a4161-osd--block--a7b5a8de--6218--5c80--971a--bac3422a4161'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i96O0I-6ZhJ-pW7N-qvO4-FBsN-X9RC-YPRKiL', 'scsi-0QEMU_QEMU_HARDDISK_6d2e8bc3-4c44-4e8e-a645-39611fbfc66e', 'scsi-SQEMU_QEMU_HARDDISK_6d2e8bc3-4c44-4e8e-a645-39611fbfc66e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.034991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_922621dd-972b-4e9a-bc9e-e1e44ba503f7', 'scsi-SQEMU_QEMU_HARDDISK_922621dd-972b-4e9a-bc9e-e1e44ba503f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.035009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 00:54:57.035021 | orchestrator | skipping: [testbed-node-5] 2025-09-17 00:54:57.035032 | orchestrator | 2025-09-17 00:54:57.035043 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-09-17 00:54:57.035054 | orchestrator | Wednesday 17 September 2025 00:53:02 +0000 (0:00:00.573) 0:00:16.301 *** 2025-09-17 00:54:57.035065 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac-osd--block--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac', 'dm-uuid-LVM-McC2YUMR0tAmxxPtPELmePGU9mXFtjqgGMi3eXu9ExMPGx9GB5MWg6FIUhVyKJBC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15-osd--block--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15', 'dm-uuid-LVM-5tViTeBQ8Oc8FV55WuseHuulgx8yDHyMvxg1WaUVs60eWQXhd242ptbzYumv4J0O'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035141 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035164 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d-osd--block--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d', 'dm-uuid-LVM-r0ALozGhR2L6c4c7HnSkc1ujfUDnirHj3dZxSBnBOJf5ffjzIaJx0xH3iSe13R5t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035186 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035198 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d1158166--3610--5fc1--bd8e--5288705939fa-osd--block--d1158166--3610--5fc1--bd8e--5288705939fa', 'dm-uuid-LVM-dR6rM0Kg4Yk1klH1e3rZpEqV5UEKMyR8IP6ZgT2lRWoV3IzM5QJnxvF0tIC8kqPz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-17 00:54:57.035228 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035240 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035276 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part1', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part14', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part15', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part16', 'scsi-SQEMU_QEMU_HARDDISK_e64b4021-8d2d-4c49-b067-c44086593130-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035296 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035308 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac-osd--block--3f2c044b--dfa5--5506--ae92--c5b86c73e5ac'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WSmpyP-VGDL-Cazr-wGD7-fLQw-LGiy-vjHBIz', 'scsi-0QEMU_QEMU_HARDDISK_03b82624-b2d4-4492-aa08-93320337b68f', 'scsi-SQEMU_QEMU_HARDDISK_03b82624-b2d4-4492-aa08-93320337b68f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035334 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035346 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15-osd--block--fe66c6e3--4f85--5e6e--b974--d8af1fb98b15'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UQTxLh-k3Ne-eFGo-NHuZ-hAu5-qrVj-eddquS', 'scsi-0QEMU_QEMU_HARDDISK_6f825aad-5321-4538-8ab0-212b689e74fb', 'scsi-SQEMU_QEMU_HARDDISK_6f825aad-5321-4538-8ab0-212b689e74fb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035358 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035377 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a', 'scsi-SQEMU_QEMU_HARDDISK_23efb5f1-23e4-4ac0-ae6c-f5e9dc9da96a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035389 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035418 | orchestrator | skipping: 
[testbed-node-3] 2025-09-17 00:54:57.035435 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035446 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035467 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part1', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part14', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part15', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part16', 'scsi-SQEMU_QEMU_HARDDISK_accea0c0-dd19-4395-8ed0-8cd720a4863e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
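[annotation] The long run of `skipping:` items here is expected: the ceph-ansible task loops over every entry in `ansible_devices` but is guarded by `when: osd_auto_discovery | default(False) | bool`, which is false in this testbed, so each loop item (loop devices, sda, sdb, sr0, ...) is reported as skipped rather than processed. A rough Python sketch of that logic, assuming (not taken from ceph-ansible) that auto-discovery would pick unpartitioned, non-removable disks:

```python
def to_bool(value):
    # Rough analogue of the Jinja2/Ansible `| bool` filter.
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "1", "on")

def devices_to_process(hostvars, ansible_devices):
    # Equivalent of looping over ansible_devices with
    # `when: osd_auto_discovery | default(False) | bool`.
    if not to_bool(hostvars.get("osd_auto_discovery", False)):
        return []  # every loop item reports "skipping", as in the log above
    # Hypothetical auto-discovery criteria for illustration only:
    # keep disks with no partitions that are not removable media.
    return [name for name, dev in ansible_devices.items()
            if not dev["partitions"] and dev["removable"] == "0"]

devices = {
    "sda": {"partitions": {"sda1": {}}, "removable": "0"},
    "sdb": {"partitions": {}, "removable": "0"},
    "sr0": {"partitions": {}, "removable": "1"},
}
print(devices_to_process({}, devices))                             # []
print(devices_to_process({"osd_auto_discovery": "yes"}, devices))  # ['sdb']
```

This is only a sketch of the skip condition seen in the log, not ceph-ansible's actual implementation.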
2025-09-17 00:54:57.035486 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2dc6576b--ad92--58b3--afc8--22b8ce20905e-osd--block--2dc6576b--ad92--58b3--afc8--22b8ce20905e', 'dm-uuid-LVM-pg03lch3KeYFVodEW4yidR22kwuRJWf4FMzgfXPuysxufP7dxlXYlkXK1PxX2k6x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035503 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d-osd--block--f65d6451--63aa--5ff6--99b4--c6c20cacdd2d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QzUXag-mzmG-28Du-zQhp-kWL6-8Jlr-5JkD4t', 'scsi-0QEMU_QEMU_HARDDISK_47b64ee5-5944-488f-91ba-80947343c2c4', 'scsi-SQEMU_QEMU_HARDDISK_47b64ee5-5944-488f-91ba-80947343c2c4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035514 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7b5a8de--6218--5c80--971a--bac3422a4161-osd--block--a7b5a8de--6218--5c80--971a--bac3422a4161', 'dm-uuid-LVM-rAcXXceLpCWNdtel1qhxoK03BK36ONz0uhiweTSt4wUIKPoTcUAf36ISGrRTjdlw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035532 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d1158166--3610--5fc1--bd8e--5288705939fa-osd--block--d1158166--3610--5fc1--bd8e--5288705939fa'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PyuIpC-udvS-pGEe-yyK7-PyS9-dhMf-1dMXyQ', 'scsi-0QEMU_QEMU_HARDDISK_69134018-d148-466a-9d44-263112a1226d', 'scsi-SQEMU_QEMU_HARDDISK_69134018-d148-466a-9d44-263112a1226d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035544 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035566 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b516b0-60cf-4ba1-b912-e488bac04690', 'scsi-SQEMU_QEMU_HARDDISK_34b516b0-60cf-4ba1-b912-e488bac04690'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035578 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035590 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035601 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035612 | orchestrator | skipping: [testbed-node-4] 2025-09-17 00:54:57.035630 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035642 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035659 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035675 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035687 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035706 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part1', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part14', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part15', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part16', 'scsi-SQEMU_QEMU_HARDDISK_14e35ba1-2869-4981-bf2a-53888936c571-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035729 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2dc6576b--ad92--58b3--afc8--22b8ce20905e-osd--block--2dc6576b--ad92--58b3--afc8--22b8ce20905e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-w8FV3c-Boa2-FlG3-ELoA-Z810-NCtv-GGfCh5', 'scsi-0QEMU_QEMU_HARDDISK_833e18f8-a2f7-4c8c-b617-8f83ac55bde9', 'scsi-SQEMU_QEMU_HARDDISK_833e18f8-a2f7-4c8c-b617-8f83ac55bde9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035743 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a7b5a8de--6218--5c80--971a--bac3422a4161-osd--block--a7b5a8de--6218--5c80--971a--bac3422a4161'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i96O0I-6ZhJ-pW7N-qvO4-FBsN-X9RC-YPRKiL', 'scsi-0QEMU_QEMU_HARDDISK_6d2e8bc3-4c44-4e8e-a645-39611fbfc66e', 'scsi-SQEMU_QEMU_HARDDISK_6d2e8bc3-4c44-4e8e-a645-39611fbfc66e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035754 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_922621dd-972b-4e9a-bc9e-e1e44ba503f7', 'scsi-SQEMU_QEMU_HARDDISK_922621dd-972b-4e9a-bc9e-e1e44ba503f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 00:54:57.035774 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 00:54:57.035792 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:54:57.035803 | orchestrator |
2025-09-17 00:54:57.035814 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-17 00:54:57.035825 | orchestrator | Wednesday 17 September 2025 00:53:02 +0000 (0:00:00.561) 0:00:16.863 ***
2025-09-17 00:54:57.035836 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:54:57.035847 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:54:57.035858 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:54:57.035869 | orchestrator |
2025-09-17 00:54:57.035880 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-17 00:54:57.035890 | orchestrator | Wednesday 17 September 2025 00:53:03 +0000 (0:00:00.701) 0:00:17.564 ***
2025-09-17 00:54:57.035901 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:54:57.035964 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:54:57.035977 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:54:57.035988 | orchestrator |
2025-09-17 00:54:57.035999 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-17 00:54:57.036010 | orchestrator | Wednesday 17 September 2025 00:53:04 +0000 (0:00:00.489) 0:00:18.054 ***
2025-09-17 00:54:57.036021 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:54:57.036031 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:54:57.036042 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:54:57.036052 | orchestrator |
2025-09-17 00:54:57.036063 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-17 00:54:57.036074 | orchestrator | Wednesday 17 September 2025 00:53:04 +0000 (0:00:00.643) 0:00:18.697 ***
2025-09-17 00:54:57.036085 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:54:57.036096 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:54:57.036106 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:54:57.036117 | orchestrator |
2025-09-17 00:54:57.036128 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-17 00:54:57.036138 | orchestrator | Wednesday 17 September 2025 00:53:04 +0000 (0:00:00.397) 0:00:18.990 ***
2025-09-17 00:54:57.036149 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:54:57.036160 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:54:57.036170 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:54:57.036181 | orchestrator |
2025-09-17 00:54:57.036197 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-17 00:54:57.036207 | orchestrator | Wednesday 17 September 2025 00:53:05 +0000 (0:00:00.501) 0:00:19.388 ***
2025-09-17 00:54:57.036217 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:54:57.036226 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:54:57.036236 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:54:57.036245 | orchestrator |
2025-09-17 00:54:57.036255 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-17 00:54:57.036264 | orchestrator | Wednesday 17 September 2025 00:53:05 +0000 (0:00:00.501) 0:00:19.890 ***
2025-09-17 00:54:57.036274 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-17 00:54:57.036284 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-17 00:54:57.036294 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-17 00:54:57.036303 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-17 00:54:57.036313 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-17 00:54:57.036322 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-17 00:54:57.036331 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-17 00:54:57.036341 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-17 00:54:57.036351 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-17 00:54:57.036360 | orchestrator |
2025-09-17 00:54:57.036370 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-17 00:54:57.036385 | orchestrator | Wednesday 17 September 2025 00:53:06 +0000 (0:00:00.878) 0:00:20.768 ***
2025-09-17 00:54:57.036395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-17 00:54:57.036405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-17 00:54:57.036414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-17 00:54:57.036423 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:54:57.036433 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-17 00:54:57.036442 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-17 00:54:57.036451 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-17 00:54:57.036461 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:54:57.036470 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-17 00:54:57.036480 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-17 00:54:57.036489 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-17 00:54:57.036499 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:54:57.036508 | orchestrator |
2025-09-17 00:54:57.036517 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-17 00:54:57.036527 | orchestrator | Wednesday 17 September 2025 00:53:07 +0000 (0:00:00.344) 0:00:21.112 ***
2025-09-17 00:54:57.036537 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 00:54:57.036546 | orchestrator |
2025-09-17 00:54:57.036556 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-17 00:54:57.036567 | orchestrator | Wednesday 17 September 2025 00:53:07 +0000 (0:00:00.800) 0:00:21.913 ***
2025-09-17 00:54:57.036577 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:54:57.036586 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:54:57.036596 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:54:57.036605 | orchestrator |
2025-09-17 00:54:57.036621 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-17 00:54:57.036631 | orchestrator | Wednesday 17 September 2025 00:53:08 +0000 (0:00:00.372) 0:00:22.286 ***
2025-09-17 00:54:57.036641 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:54:57.036650 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:54:57.036660 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:54:57.036669 | orchestrator |
2025-09-17 00:54:57.036678 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-17 00:54:57.036688 | orchestrator | Wednesday 17 September 2025 00:53:08 +0000 (0:00:00.312) 0:00:22.599 ***
2025-09-17 00:54:57.036697 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:54:57.036707 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:54:57.036716 | orchestrator | skipping: [testbed-node-5]
2025-09-17 00:54:57.036726 | orchestrator |
2025-09-17 00:54:57.036735 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-17 00:54:57.036745 | orchestrator | Wednesday 17 September 2025 00:53:08 +0000 (0:00:00.401) 0:00:23.001 ***
2025-09-17 00:54:57.036754 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:54:57.036764 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:54:57.036773 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:54:57.036783 | orchestrator |
2025-09-17 00:54:57.036792 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-17 00:54:57.036802 | orchestrator | Wednesday 17 September 2025 00:53:09 +0000 (0:00:00.689) 0:00:23.690 ***
2025-09-17 00:54:57.036811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:54:57.036821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:54:57.036830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:54:57.036840 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:54:57.036850 | orchestrator |
2025-09-17 00:54:57.036859 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-17 00:54:57.036874 | orchestrator | Wednesday 17 September 2025 00:53:10 +0000 (0:00:00.376) 0:00:24.066 ***
2025-09-17 00:54:57.036884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:54:57.036893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:54:57.036903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:54:57.036927 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:54:57.036937 | orchestrator |
2025-09-17 00:54:57.036946 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-17 00:54:57.036963 | orchestrator | Wednesday 17 September 2025 00:53:10 +0000 (0:00:00.375) 0:00:24.441 ***
2025-09-17 00:54:57.036973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:54:57.036983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 00:54:57.036992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 00:54:57.037002 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:54:57.037011 | orchestrator |
2025-09-17 00:54:57.037021 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-17 00:54:57.037030 | orchestrator | Wednesday 17 September 2025 00:53:10 +0000 (0:00:00.388) 0:00:24.830 ***
2025-09-17 00:54:57.037040 | orchestrator | ok: [testbed-node-3]
2025-09-17 00:54:57.037049 | orchestrator | ok: [testbed-node-4]
2025-09-17 00:54:57.037059 | orchestrator | ok: [testbed-node-5]
2025-09-17 00:54:57.037068 | orchestrator |
2025-09-17 00:54:57.037078 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-17 00:54:57.037088 | orchestrator | Wednesday 17 September 2025 00:53:11 +0000 (0:00:00.320) 0:00:25.151 ***
2025-09-17 00:54:57.037097 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-17 00:54:57.037107 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-17 00:54:57.037116 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-17 00:54:57.037126 | orchestrator |
2025-09-17 00:54:57.037135 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-17 00:54:57.037145 | orchestrator | Wednesday 17 September 2025 00:53:11 +0000 (0:00:00.493) 0:00:25.645 ***
2025-09-17 00:54:57.037155 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-17 00:54:57.037164 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-17 00:54:57.037174 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-17 00:54:57.037183 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:54:57.037193 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-17 00:54:57.037202 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-17 00:54:57.037212 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-17 00:54:57.037221 | orchestrator |
2025-09-17 00:54:57.037231 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-17 00:54:57.037241 | orchestrator | Wednesday 17 September 2025 00:53:12 +0000 (0:00:01.093) 0:00:26.738 ***
2025-09-17 00:54:57.037250 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-17 00:54:57.037260 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-17 00:54:57.037269 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-17 00:54:57.037279 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 00:54:57.037288 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-17 00:54:57.037298 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-17 00:54:57.037308 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-17 00:54:57.037323 | orchestrator |
2025-09-17 00:54:57.037338 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-09-17 00:54:57.037348 | orchestrator | Wednesday 17 September 2025 00:53:14 +0000 (0:00:01.928) 0:00:28.666 ***
2025-09-17 00:54:57.037357 | orchestrator | skipping: [testbed-node-3]
2025-09-17 00:54:57.037367 | orchestrator | skipping: [testbed-node-4]
2025-09-17 00:54:57.037376 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-09-17 00:54:57.037386 | orchestrator |
2025-09-17 00:54:57.037395 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-09-17 00:54:57.037404 | orchestrator | Wednesday 17 September 2025 00:53:15 +0000 (0:00:00.430) 0:00:29.097 ***
2025-09-17 00:54:57.037415 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-17 00:54:57.037425 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-17 00:54:57.037435 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-17 00:54:57.037446 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-17 00:54:57.037460 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-17 00:54:57.037470 | orchestrator |
2025-09-17 00:54:57.037480 | orchestrator | TASK [generate keys] ***********************************************************
2025-09-17 00:54:57.037489 | orchestrator | Wednesday 17 September 2025 00:54:00 +0000 (0:00:45.566) 0:01:14.663 ***
2025-09-17 00:54:57.037499 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037509 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037518 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037528 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037537 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037547 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037556 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-09-17 00:54:57.037566 | orchestrator |
2025-09-17 00:54:57.037575 | orchestrator | TASK [get keys from monitors] **************************************************
2025-09-17 00:54:57.037585 | orchestrator | Wednesday 17 September 2025 00:54:25 +0000 (0:00:25.152) 0:01:39.815 ***
2025-09-17 00:54:57.037595 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037604 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037613 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037623 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037638 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037648 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 00:54:57.037658 | orchestrator |
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-17 00:54:57.037667 | orchestrator | 2025-09-17 00:54:57.037677 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-17 00:54:57.037686 | orchestrator | Wednesday 17 September 2025 00:54:38 +0000 (0:00:12.373) 0:01:52.189 *** 2025-09-17 00:54:57.037696 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:54:57.037705 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 00:54:57.037715 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 00:54:57.037724 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:54:57.037734 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 00:54:57.037744 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 00:54:57.037759 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:54:57.037769 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 00:54:57.037779 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 00:54:57.037789 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:54:57.037798 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 00:54:57.037808 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 00:54:57.037817 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:54:57.037827 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-09-17 00:54:57.037836 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 00:54:57.037845 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 00:54:57.037855 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 00:54:57.037864 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 00:54:57.037874 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-17 00:54:57.037884 | orchestrator | 2025-09-17 00:54:57.037893 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:54:57.037903 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-17 00:54:57.037931 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-17 00:54:57.037942 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-17 00:54:57.037951 | orchestrator | 2025-09-17 00:54:57.037961 | orchestrator | 2025-09-17 00:54:57.037970 | orchestrator | 2025-09-17 00:54:57.037980 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:54:57.037994 | orchestrator | Wednesday 17 September 2025 00:54:56 +0000 (0:00:17.954) 0:02:10.144 *** 2025-09-17 00:54:57.038004 | orchestrator | =============================================================================== 2025-09-17 00:54:57.038013 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.57s 2025-09-17 00:54:57.038052 | orchestrator | generate keys ---------------------------------------------------------- 25.15s 2025-09-17 00:54:57.038062 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.96s 
2025-09-17 00:54:57.038078 | orchestrator | get keys from monitors ------------------------------------------------- 12.37s 2025-09-17 00:54:57.038088 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.22s 2025-09-17 00:54:57.038097 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.93s 2025-09-17 00:54:57.038107 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.74s 2025-09-17 00:54:57.038116 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.09s 2025-09-17 00:54:57.038126 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.88s 2025-09-17 00:54:57.038136 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.80s 2025-09-17 00:54:57.038146 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.77s 2025-09-17 00:54:57.038155 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.75s 2025-09-17 00:54:57.038165 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.70s 2025-09-17 00:54:57.038174 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.69s 2025-09-17 00:54:57.038184 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2025-09-17 00:54:57.038193 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.63s 2025-09-17 00:54:57.038203 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.62s 2025-09-17 00:54:57.038213 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s 2025-09-17 00:54:57.038222 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.57s 2025-09-17 
00:54:57.038232 | orchestrator | ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks --- 0.57s 2025-09-17 00:54:57.038241 | orchestrator | 2025-09-17 00:54:57 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:54:57.038251 | orchestrator | 2025-09-17 00:54:57 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:54:57.038261 | orchestrator | 2025-09-17 00:54:57 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:00.085540 | orchestrator | 2025-09-17 00:55:00 | INFO  | Task e3b4c1a6-467d-463e-b988-7d90431ea6ee is in state STARTED 2025-09-17 00:55:00.087832 | orchestrator | 2025-09-17 00:55:00 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:55:00.090092 | orchestrator | 2025-09-17 00:55:00 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:00.090350 | orchestrator | 2025-09-17 00:55:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:03.134368 | orchestrator | 2025-09-17 00:55:03 | INFO  | Task e3b4c1a6-467d-463e-b988-7d90431ea6ee is in state STARTED 2025-09-17 00:55:03.136299 | orchestrator | 2025-09-17 00:55:03 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:55:03.137782 | orchestrator | 2025-09-17 00:55:03 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:03.137884 | orchestrator | 2025-09-17 00:55:03 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:06.184205 | orchestrator | 2025-09-17 00:55:06 | INFO  | Task e3b4c1a6-467d-463e-b988-7d90431ea6ee is in state STARTED 2025-09-17 00:55:06.186374 | orchestrator | 2025-09-17 00:55:06 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:55:06.188332 | orchestrator | 2025-09-17 00:55:06 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:06.188353 | orchestrator | 
2025-09-17 00:55:06 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:09.226155 | orchestrator | 2025-09-17 00:55:09 | INFO  | Task e3b4c1a6-467d-463e-b988-7d90431ea6ee is in state STARTED 2025-09-17 00:55:09.228234 | orchestrator | 2025-09-17 00:55:09 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:55:09.228938 | orchestrator | 2025-09-17 00:55:09 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:09.229199 | orchestrator | 2025-09-17 00:55:09 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:12.289151 | orchestrator | 2025-09-17 00:55:12 | INFO  | Task e3b4c1a6-467d-463e-b988-7d90431ea6ee is in state STARTED 2025-09-17 00:55:12.291440 | orchestrator | 2025-09-17 00:55:12 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:55:12.294558 | orchestrator | 2025-09-17 00:55:12 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:12.294623 | orchestrator | 2025-09-17 00:55:12 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:15.350253 | orchestrator | 2025-09-17 00:55:15 | INFO  | Task e3b4c1a6-467d-463e-b988-7d90431ea6ee is in state STARTED 2025-09-17 00:55:15.351689 | orchestrator | 2025-09-17 00:55:15 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:55:15.353389 | orchestrator | 2025-09-17 00:55:15 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:15.353416 | orchestrator | 2025-09-17 00:55:15 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:18.400516 | orchestrator | 2025-09-17 00:55:18 | INFO  | Task e3b4c1a6-467d-463e-b988-7d90431ea6ee is in state STARTED 2025-09-17 00:55:18.402594 | orchestrator | 2025-09-17 00:55:18 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state STARTED 2025-09-17 00:55:18.409707 | orchestrator | 2025-09-17 00:55:18 | INFO  | Task 
4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:18.409743 | orchestrator | 2025-09-17 00:55:18 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:21.450238 | orchestrator | 2025-09-17 00:55:21 | INFO  | Task e3b4c1a6-467d-463e-b988-7d90431ea6ee is in state STARTED 2025-09-17 00:55:21.453744 | orchestrator | 2025-09-17 00:55:21 | INFO  | Task 5d684c66-5aa1-4373-9cde-eb1f58281a80 is in state SUCCESS 2025-09-17 00:55:21.455648 | orchestrator | 2025-09-17 00:55:21.455709 | orchestrator | 2025-09-17 00:55:21.455725 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 00:55:21.455737 | orchestrator | 2025-09-17 00:55:21.455748 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 00:55:21.455760 | orchestrator | Wednesday 17 September 2025 00:53:39 +0000 (0:00:00.288) 0:00:00.288 *** 2025-09-17 00:55:21.455772 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:55:21.455784 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:55:21.455795 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:55:21.455805 | orchestrator | 2025-09-17 00:55:21.455816 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 00:55:21.455827 | orchestrator | Wednesday 17 September 2025 00:53:39 +0000 (0:00:00.311) 0:00:00.600 *** 2025-09-17 00:55:21.455838 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-17 00:55:21.455849 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-17 00:55:21.455860 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-17 00:55:21.455871 | orchestrator | 2025-09-17 00:55:21.455881 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-17 00:55:21.455892 | orchestrator | 2025-09-17 00:55:21.455903 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-09-17 00:55:21.455959 | orchestrator | Wednesday 17 September 2025 00:53:40 +0000 (0:00:00.477) 0:00:01.077 *** 2025-09-17 00:55:21.455974 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:55:21.456013 | orchestrator | 2025-09-17 00:55:21.456121 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-17 00:55:21.456137 | orchestrator | Wednesday 17 September 2025 00:53:40 +0000 (0:00:00.535) 0:00:01.612 *** 2025-09-17 00:55:21.456170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 00:55:21.456290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 00:55:21.456331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 00:55:21.456345 | orchestrator | 2025-09-17 00:55:21.456356 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-17 00:55:21.456367 | orchestrator | Wednesday 17 September 2025 00:53:41 +0000 (0:00:01.147) 0:00:02.759 *** 2025-09-17 00:55:21.456378 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:55:21.456390 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:55:21.456400 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:55:21.456411 | orchestrator | 2025-09-17 00:55:21.456422 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-17 00:55:21.456433 | orchestrator | Wednesday 17 September 2025 00:53:42 +0000 (0:00:00.557) 0:00:03.317 *** 2025-09-17 
00:55:21.456444 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-17 00:55:21.456463 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-17 00:55:21.456474 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-17 00:55:21.456485 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-17 00:55:21.456497 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-17 00:55:21.456507 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-17 00:55:21.456525 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-17 00:55:21.456535 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-17 00:55:21.456546 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-17 00:55:21.456556 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-17 00:55:21.456567 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-17 00:55:21.456578 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-17 00:55:21.456588 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-17 00:55:21.456599 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-17 00:55:21.456609 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-17 00:55:21.456620 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-17 00:55:21.456631 | orchestrator | skipping: [testbed-node-2] => (item={'name': 
'cloudkitty', 'enabled': False})  2025-09-17 00:55:21.456641 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-17 00:55:21.456657 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-17 00:55:21.456675 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-17 00:55:21.456693 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-17 00:55:21.456710 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-17 00:55:21.456728 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-17 00:55:21.456745 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-17 00:55:21.456765 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-17 00:55:21.456785 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-17 00:55:21.456803 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-17 00:55:21.456821 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-17 00:55:21.456848 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-17 00:55:21.456866 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-17 00:55:21.456886 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-17 00:55:21.456904 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-17 00:55:21.456948 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-17 00:55:21.456962 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-17 00:55:21.456974 | orchestrator | 2025-09-17 00:55:21.456997 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-17 00:55:21.457010 | orchestrator | Wednesday 17 September 2025 00:53:43 +0000 (0:00:00.751) 0:00:04.069 *** 2025-09-17 00:55:21.457022 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:55:21.457035 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:55:21.457047 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:55:21.457059 | orchestrator | 2025-09-17 00:55:21.457071 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-17 00:55:21.457084 | orchestrator | Wednesday 17 September 2025 00:53:43 +0000 (0:00:00.298) 0:00:04.367 *** 2025-09-17 00:55:21.457097 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:55:21.457109 | orchestrator | 2025-09-17 00:55:21.457131 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-17 00:55:21.457145 | orchestrator | Wednesday 17 September 2025 00:53:43 +0000 (0:00:00.149) 0:00:04.517 *** 2025-09-17 00:55:21.457157 | 
orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.457169 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.457180 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.457190 | orchestrator |
2025-09-17 00:55:21.457201 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 00:55:21.457212 | orchestrator | Wednesday 17 September 2025 00:53:44 +0000 (0:00:00.434) 0:00:04.951 ***
2025-09-17 00:55:21.457222 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:55:21.457233 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:55:21.457243 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:55:21.457254 | orchestrator |
2025-09-17 00:55:21.457265 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 00:55:21.457276 | orchestrator | Wednesday 17 September 2025 00:53:44 +0000 (0:00:00.314) 0:00:05.266 ***
2025-09-17 00:55:21.457286 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.457297 | orchestrator |
2025-09-17 00:55:21.457307 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 00:55:21.457318 | orchestrator | Wednesday 17 September 2025 00:53:44 +0000 (0:00:00.139) 0:00:05.405 ***
2025-09-17 00:55:21.457329 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.457339 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.457350 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.457360 | orchestrator |
2025-09-17 00:55:21.457371 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 00:55:21.457382 | orchestrator | Wednesday 17 September 2025 00:53:44 +0000 (0:00:00.282) 0:00:05.687 ***
2025-09-17 00:55:21.457392 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:55:21.457403 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:55:21.457414 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:55:21.457424 | orchestrator |
2025-09-17 00:55:21.457435 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 00:55:21.457445 | orchestrator | Wednesday 17 September 2025 00:53:45 +0000 (0:00:00.300) 0:00:05.988 ***
2025-09-17 00:55:21.457456 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.457467 | orchestrator |
2025-09-17 00:55:21.457477 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 00:55:21.457488 | orchestrator | Wednesday 17 September 2025 00:53:45 +0000 (0:00:00.125) 0:00:06.114 ***
2025-09-17 00:55:21.457498 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.457509 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.457520 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.457530 | orchestrator |
2025-09-17 00:55:21.457541 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 00:55:21.457551 | orchestrator | Wednesday 17 September 2025 00:53:45 +0000 (0:00:00.493) 0:00:06.607 ***
2025-09-17 00:55:21.457562 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:55:21.457573 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:55:21.457583 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:55:21.457594 | orchestrator |
2025-09-17 00:55:21.457605 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 00:55:21.457622 | orchestrator | Wednesday 17 September 2025 00:53:46 +0000 (0:00:00.316) 0:00:06.924 ***
2025-09-17 00:55:21.457633 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.457643 | orchestrator |
2025-09-17 00:55:21.457654 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 00:55:21.457665 | orchestrator | Wednesday 17 September 2025 00:53:46 +0000 (0:00:00.123) 0:00:07.047 ***
2025-09-17 00:55:21.457675 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.457686 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.457696 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.457707 | orchestrator |
2025-09-17 00:55:21.457718 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 00:55:21.457728 | orchestrator | Wednesday 17 September 2025 00:53:46 +0000 (0:00:00.319) 0:00:07.366 ***
2025-09-17 00:55:21.457739 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:55:21.457749 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:55:21.457765 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:55:21.457777 | orchestrator |
2025-09-17 00:55:21.457787 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 00:55:21.457798 | orchestrator | Wednesday 17 September 2025 00:53:46 +0000 (0:00:00.303) 0:00:07.670 ***
2025-09-17 00:55:21.457808 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.457819 | orchestrator |
2025-09-17 00:55:21.457830 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 00:55:21.457840 | orchestrator | Wednesday 17 September 2025 00:53:47 +0000 (0:00:00.310) 0:00:07.980 ***
2025-09-17 00:55:21.457859 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.457877 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.457895 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.457972 | orchestrator |
2025-09-17 00:55:21.457996 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 00:55:21.458076 | orchestrator | Wednesday 17 September 2025 00:53:47 +0000 (0:00:00.289) 0:00:08.270 ***
2025-09-17 00:55:21.458111 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:55:21.458130 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:55:21.458148 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:55:21.458159 | orchestrator |
2025-09-17 00:55:21.458170 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 00:55:21.458181 | orchestrator | Wednesday 17 September 2025 00:53:47 +0000 (0:00:00.294) 0:00:08.564 ***
2025-09-17 00:55:21.458192 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.458202 | orchestrator |
2025-09-17 00:55:21.458213 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 00:55:21.458223 | orchestrator | Wednesday 17 September 2025 00:53:47 +0000 (0:00:00.122) 0:00:08.687 ***
2025-09-17 00:55:21.458234 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.458244 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.458255 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.458265 | orchestrator |
2025-09-17 00:55:21.458276 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 00:55:21.458297 | orchestrator | Wednesday 17 September 2025 00:53:48 +0000 (0:00:00.304) 0:00:08.991 ***
2025-09-17 00:55:21.458308 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:55:21.458319 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:55:21.458330 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:55:21.458340 | orchestrator |
2025-09-17 00:55:21.458351 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 00:55:21.458361 | orchestrator | Wednesday 17 September 2025 00:53:48 +0000 (0:00:00.494) 0:00:09.485 ***
2025-09-17 00:55:21.458372 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.458382 | orchestrator |
2025-09-17 00:55:21.458393 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 00:55:21.458403 | orchestrator | Wednesday 17 September 2025 00:53:48 +0000 (0:00:00.118) 0:00:09.604 ***
2025-09-17 00:55:21.458425 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.458435 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.458446 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.458457 | orchestrator |
2025-09-17 00:55:21.458468 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 00:55:21.458478 | orchestrator | Wednesday 17 September 2025 00:53:49 +0000 (0:00:00.315) 0:00:09.919 ***
2025-09-17 00:55:21.458489 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:55:21.458498 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:55:21.458508 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:55:21.458517 | orchestrator |
2025-09-17 00:55:21.458527 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 00:55:21.458536 | orchestrator | Wednesday 17 September 2025 00:53:49 +0000 (0:00:00.323) 0:00:10.243 ***
2025-09-17 00:55:21.458545 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.458555 | orchestrator |
2025-09-17 00:55:21.458564 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 00:55:21.458574 | orchestrator | Wednesday 17 September 2025 00:53:49 +0000 (0:00:00.137) 0:00:10.381 ***
2025-09-17 00:55:21.458583 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.458592 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.458602 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.458611 | orchestrator |
2025-09-17 00:55:21.458620 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 00:55:21.458630 | orchestrator | Wednesday 17 September 2025 00:53:49 +0000 (0:00:00.300) 0:00:10.681 ***
2025-09-17 00:55:21.458639 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:55:21.458649 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:55:21.458658 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:55:21.458667 | orchestrator |
2025-09-17 00:55:21.458677 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 00:55:21.458686 | orchestrator | Wednesday 17 September 2025 00:53:50 +0000 (0:00:00.602) 0:00:11.284 ***
2025-09-17 00:55:21.458696 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.458705 | orchestrator |
2025-09-17 00:55:21.458715 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 00:55:21.458724 | orchestrator | Wednesday 17 September 2025 00:53:50 +0000 (0:00:00.122) 0:00:11.406 ***
2025-09-17 00:55:21.458734 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.458743 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.458752 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.458762 | orchestrator |
2025-09-17 00:55:21.458771 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 00:55:21.458781 | orchestrator | Wednesday 17 September 2025 00:53:50 +0000 (0:00:00.309) 0:00:11.715 ***
2025-09-17 00:55:21.458790 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:55:21.458800 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:55:21.458809 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:55:21.458818 | orchestrator |
2025-09-17 00:55:21.458832 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 00:55:21.458849 | orchestrator | Wednesday 17 September 2025 00:53:51 +0000 (0:00:00.314) 0:00:12.030 ***
2025-09-17 00:55:21.458865 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.458880 | orchestrator |
2025-09-17 00:55:21.458896 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 00:55:21.458912 | orchestrator | Wednesday 17 September 2025 00:53:51 +0000 (0:00:00.135) 0:00:12.165 ***
2025-09-17 00:55:21.459080 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.459091 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.459101 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.459110 | orchestrator |
2025-09-17 00:55:21.459120 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-09-17 00:55:21.459129 | orchestrator | Wednesday 17 September 2025 00:53:51 +0000 (0:00:00.458) 0:00:12.623 ***
2025-09-17 00:55:21.459139 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:55:21.459165 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:55:21.459175 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:55:21.459185 | orchestrator |
2025-09-17 00:55:21.459194 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-09-17 00:55:21.459203 | orchestrator | Wednesday 17 September 2025 00:53:53 +0000 (0:00:01.757) 0:00:14.381 ***
2025-09-17 00:55:21.459213 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-17 00:55:21.459223 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-17 00:55:21.459233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-17 00:55:21.459242 | orchestrator |
2025-09-17 00:55:21.459252 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-09-17 00:55:21.459261 | orchestrator | Wednesday 17 September 2025 00:53:55 +0000 (0:00:01.949) 0:00:16.330 ***
2025-09-17 00:55:21.459268 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-17 00:55:21.459277 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-17 00:55:21.459285 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-17 00:55:21.459292 | orchestrator |
2025-09-17 00:55:21.459300 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-09-17 00:55:21.459317 | orchestrator | Wednesday 17 September 2025 00:53:57 +0000 (0:00:02.064) 0:00:18.395 ***
2025-09-17 00:55:21.459325 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-17 00:55:21.459333 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-17 00:55:21.459341 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-17 00:55:21.459349 | orchestrator |
2025-09-17 00:55:21.459356 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-09-17 00:55:21.459364 | orchestrator | Wednesday 17 September 2025 00:53:59 +0000 (0:00:02.091) 0:00:20.486 ***
2025-09-17 00:55:21.459372 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.459379 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.459387 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.459395 | orchestrator |
2025-09-17 00:55:21.459402 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-09-17 00:55:21.459410 | orchestrator | Wednesday 17 September 2025 00:53:59 +0000 (0:00:00.265) 0:00:20.751 ***
2025-09-17 00:55:21.459418 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:55:21.459437 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:55:21.459445 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:55:21.459461 | orchestrator |
2025-09-17 00:55:21.459469 | orchestrator | TASK [horizon : include_tasks]
************************************************* 2025-09-17 00:55:21.459477 | orchestrator | Wednesday 17 September 2025 00:54:00 +0000 (0:00:00.252) 0:00:21.004 *** 2025-09-17 00:55:21.459485 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:55:21.459492 | orchestrator | 2025-09-17 00:55:21.459500 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-17 00:55:21.459508 | orchestrator | Wednesday 17 September 2025 00:54:00 +0000 (0:00:00.515) 0:00:21.519 *** 2025-09-17 00:55:21.459523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 00:55:21.459548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 00:55:21.459562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 00:55:21.459578 | orchestrator | 2025-09-17 00:55:21.459587 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-17 00:55:21.459595 | orchestrator | Wednesday 17 September 2025 00:54:02 +0000 (0:00:01.683) 0:00:23.203 *** 2025-09-17 00:55:21.459610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 00:55:21.459625 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:55:21.459643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2025-09-17 00:55:21.459652 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:55:21.459661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 00:55:21.459675 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:55:21.459683 | orchestrator | 2025-09-17 00:55:21.459691 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-17 00:55:21.459698 | orchestrator | Wednesday 17 September 2025 00:54:02 +0000 (0:00:00.579) 0:00:23.783 *** 2025-09-17 00:55:21.459717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 00:55:21.459726 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:55:21.459738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 00:55:21.459752 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:55:21.459767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 00:55:21.459777 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:55:21.459785 | orchestrator | 2025-09-17 00:55:21.459792 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-17 00:55:21.459800 | orchestrator | Wednesday 17 September 2025 00:54:03 +0000 (0:00:00.806) 0:00:24.589 *** 2025-09-17 00:55:21.459813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 00:55:21.459833 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 00:55:21.459852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 00:55:21.459861 | orchestrator | 2025-09-17 00:55:21.459869 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-17 00:55:21.459877 | orchestrator | Wednesday 17 September 2025 00:54:05 +0000 (0:00:01.431) 0:00:26.020 *** 2025-09-17 00:55:21.459885 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:55:21.459893 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:55:21.459901 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:55:21.459908 | orchestrator | 2025-09-17 00:55:21.459960 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-17 00:55:21.459969 | orchestrator | Wednesday 17 September 2025 00:54:05 +0000 (0:00:00.281) 0:00:26.301 *** 2025-09-17 00:55:21.459976 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:55:21.459984 | orchestrator | 2025-09-17 00:55:21.459992 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-17 00:55:21.460005 | orchestrator | Wednesday 17 September 2025 00:54:05 +0000 (0:00:00.495) 0:00:26.797 *** 2025-09-17 00:55:21.460013 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:55:21.460021 | orchestrator | 2025-09-17 00:55:21.460029 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-17 00:55:21.460037 | orchestrator | Wednesday 17 September 2025 00:54:08 +0000 (0:00:02.402) 0:00:29.200 *** 2025-09-17 00:55:21.460045 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:55:21.460052 | 
orchestrator | 2025-09-17 00:55:21.460060 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-17 00:55:21.460068 | orchestrator | Wednesday 17 September 2025 00:54:10 +0000 (0:00:02.612) 0:00:31.812 *** 2025-09-17 00:55:21.460076 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:55:21.460092 | orchestrator | 2025-09-17 00:55:21.460100 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-17 00:55:21.460108 | orchestrator | Wednesday 17 September 2025 00:54:27 +0000 (0:00:16.231) 0:00:48.043 *** 2025-09-17 00:55:21.460116 | orchestrator | 2025-09-17 00:55:21.460124 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-17 00:55:21.460131 | orchestrator | Wednesday 17 September 2025 00:54:27 +0000 (0:00:00.059) 0:00:48.102 *** 2025-09-17 00:55:21.460139 | orchestrator | 2025-09-17 00:55:21.460147 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-17 00:55:21.460155 | orchestrator | Wednesday 17 September 2025 00:54:27 +0000 (0:00:00.060) 0:00:48.162 *** 2025-09-17 00:55:21.460163 | orchestrator | 2025-09-17 00:55:21.460170 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-17 00:55:21.460178 | orchestrator | Wednesday 17 September 2025 00:54:27 +0000 (0:00:00.061) 0:00:48.224 *** 2025-09-17 00:55:21.460186 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:55:21.460194 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:55:21.460202 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:55:21.460209 | orchestrator | 2025-09-17 00:55:21.460217 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:55:21.460225 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-17 
00:55:21.460234 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-17 00:55:21.460242 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-17 00:55:21.460250 | orchestrator | 2025-09-17 00:55:21.460258 | orchestrator | 2025-09-17 00:55:21.460265 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:55:21.460273 | orchestrator | Wednesday 17 September 2025 00:55:19 +0000 (0:00:52.087) 0:01:40.311 *** 2025-09-17 00:55:21.460281 | orchestrator | =============================================================================== 2025-09-17 00:55:21.460289 | orchestrator | horizon : Restart horizon container ------------------------------------ 52.09s 2025-09-17 00:55:21.460297 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.23s 2025-09-17 00:55:21.460304 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.61s 2025-09-17 00:55:21.460312 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.40s 2025-09-17 00:55:21.460320 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.09s 2025-09-17 00:55:21.460328 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.06s 2025-09-17 00:55:21.460335 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.95s 2025-09-17 00:55:21.460343 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.76s 2025-09-17 00:55:21.460355 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.68s 2025-09-17 00:55:21.460363 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.43s 2025-09-17 00:55:21.460370 | orchestrator | 
horizon : Ensuring config directories exist ----------------------------- 1.15s 2025-09-17 00:55:21.460378 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.81s 2025-09-17 00:55:21.460386 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2025-09-17 00:55:21.460393 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s 2025-09-17 00:55:21.460401 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.58s 2025-09-17 00:55:21.460409 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.56s 2025-09-17 00:55:21.460422 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2025-09-17 00:55:21.460429 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2025-09-17 00:55:21.460437 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s 2025-09-17 00:55:21.460445 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2025-09-17 00:55:21.460453 | orchestrator | 2025-09-17 00:55:21 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:21.460461 | orchestrator | 2025-09-17 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:24.498183 | orchestrator | 2025-09-17 00:55:24 | INFO  | Task e3b4c1a6-467d-463e-b988-7d90431ea6ee is in state STARTED 2025-09-17 00:55:24.500247 | orchestrator | 2025-09-17 00:55:24 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:24.500278 | orchestrator | 2025-09-17 00:55:24 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:27.557730 | orchestrator | 2025-09-17 00:55:27 | INFO  | Task e3b4c1a6-467d-463e-b988-7d90431ea6ee is in state SUCCESS 2025-09-17 00:55:27.558555 | 
orchestrator | 2025-09-17 00:55:27 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:27.560299 | orchestrator | 2025-09-17 00:55:27 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:55:27.560424 | orchestrator | 2025-09-17 00:55:27 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:30.613640 | orchestrator | 2025-09-17 00:55:30 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:30.615715 | orchestrator | 2025-09-17 00:55:30 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:55:30.615748 | orchestrator | 2025-09-17 00:55:30 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:33.657573 | orchestrator | 2025-09-17 00:55:33 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:33.658525 | orchestrator | 2025-09-17 00:55:33 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:55:33.658629 | orchestrator | 2025-09-17 00:55:33 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:36.698485 | orchestrator | 2025-09-17 00:55:36 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:36.701403 | orchestrator | 2025-09-17 00:55:36 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:55:36.701498 | orchestrator | 2025-09-17 00:55:36 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:39.738585 | orchestrator | 2025-09-17 00:55:39 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:39.739753 | orchestrator | 2025-09-17 00:55:39 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:55:39.739866 | orchestrator | 2025-09-17 00:55:39 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:42.775116 | orchestrator | 2025-09-17 00:55:42 | INFO  | Task 
4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:42.776347 | orchestrator | 2025-09-17 00:55:42 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:55:42.776379 | orchestrator | 2025-09-17 00:55:42 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:45.817957 | orchestrator | 2025-09-17 00:55:45 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:45.819258 | orchestrator | 2025-09-17 00:55:45 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:55:45.819320 | orchestrator | 2025-09-17 00:55:45 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:48.869783 | orchestrator | 2025-09-17 00:55:48 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:48.872355 | orchestrator | 2025-09-17 00:55:48 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:55:48.872390 | orchestrator | 2025-09-17 00:55:48 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:51.907059 | orchestrator | 2025-09-17 00:55:51 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:51.908614 | orchestrator | 2025-09-17 00:55:51 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:55:51.908648 | orchestrator | 2025-09-17 00:55:51 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:54.961886 | orchestrator | 2025-09-17 00:55:54 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:55:54.963263 | orchestrator | 2025-09-17 00:55:54 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:55:54.963379 | orchestrator | 2025-09-17 00:55:54 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:55:58.007985 | orchestrator | 2025-09-17 00:55:58 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 
00:55:58.009232 | orchestrator | 2025-09-17 00:55:58 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:55:58.009647 | orchestrator | 2025-09-17 00:55:58 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:56:01.055715 | orchestrator | 2025-09-17 00:56:01 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:56:01.056381 | orchestrator | 2025-09-17 00:56:01 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:56:01.056418 | orchestrator | 2025-09-17 00:56:01 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:56:04.101270 | orchestrator | 2025-09-17 00:56:04 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:56:04.102644 | orchestrator | 2025-09-17 00:56:04 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:56:04.102678 | orchestrator | 2025-09-17 00:56:04 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:56:07.148462 | orchestrator | 2025-09-17 00:56:07 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:56:07.150366 | orchestrator | 2025-09-17 00:56:07 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:56:07.150894 | orchestrator | 2025-09-17 00:56:07 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:56:10.198816 | orchestrator | 2025-09-17 00:56:10 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:56:10.201018 | orchestrator | 2025-09-17 00:56:10 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:56:10.201216 | orchestrator | 2025-09-17 00:56:10 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:56:13.250531 | orchestrator | 2025-09-17 00:56:13 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state STARTED 2025-09-17 00:56:13.252874 | orchestrator | 2025-09-17 00:56:13 | INFO  | Task 
3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED 2025-09-17 00:56:13.253031 | orchestrator | 2025-09-17 00:56:13 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:56:16.302073 | orchestrator | 2025-09-17 00:56:16 | INFO  | Task 4c6ae4d6-7898-4cf8-a4dd-6d3bf41bd1c2 is in state SUCCESS 2025-09-17 00:56:16.304947 | orchestrator | 2025-09-17 00:56:16.304985 | orchestrator | 2025-09-17 00:56:16.304997 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-17 00:56:16.305008 | orchestrator | 2025-09-17 00:56:16.305017 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-17 00:56:16.305028 | orchestrator | Wednesday 17 September 2025 00:55:00 +0000 (0:00:00.156) 0:00:00.156 *** 2025-09-17 00:56:16.305038 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-17 00:56:16.305049 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-17 00:56:16.305058 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-17 00:56:16.305068 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-17 00:56:16.305077 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-17 00:56:16.305087 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-17 00:56:16.305096 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-17 00:56:16.305105 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-17 00:56:16.305128 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.manila.keyring) 2025-09-17 00:56:16.305138 | orchestrator | 2025-09-17 00:56:16.305148 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-17 00:56:16.305157 | orchestrator | Wednesday 17 September 2025 00:55:05 +0000 (0:00:04.548) 0:00:04.704 *** 2025-09-17 00:56:16.305168 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-17 00:56:16.305177 | orchestrator | 2025-09-17 00:56:16.305187 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-17 00:56:16.305196 | orchestrator | Wednesday 17 September 2025 00:55:06 +0000 (0:00:01.025) 0:00:05.729 *** 2025-09-17 00:56:16.305206 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-17 00:56:16.305215 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-17 00:56:16.305225 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-17 00:56:16.305235 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-17 00:56:16.305244 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-17 00:56:16.305254 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-17 00:56:16.305263 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-17 00:56:16.305273 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-17 00:56:16.305282 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-17 00:56:16.305291 | orchestrator | 2025-09-17 00:56:16.305301 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-17 00:56:16.305311 | orchestrator | Wednesday 17 
September 2025 00:55:19 +0000 (0:00:12.782) 0:00:18.511 *** 2025-09-17 00:56:16.305320 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-17 00:56:16.305330 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-17 00:56:16.305339 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-17 00:56:16.305349 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-17 00:56:16.305359 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-17 00:56:16.305368 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-17 00:56:16.305389 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-17 00:56:16.305399 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-17 00:56:16.305408 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-17 00:56:16.305418 | orchestrator | 2025-09-17 00:56:16.305427 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:56:16.305437 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:56:16.305448 | orchestrator | 2025-09-17 00:56:16.305457 | orchestrator | 2025-09-17 00:56:16.305467 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:56:16.305476 | orchestrator | Wednesday 17 September 2025 00:55:25 +0000 (0:00:06.547) 0:00:25.059 *** 2025-09-17 00:56:16.305486 | orchestrator | =============================================================================== 2025-09-17 00:56:16.305495 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.78s 2025-09-17 00:56:16.305505 | orchestrator | Write ceph keys to the configuration 
directory -------------------------- 6.55s 2025-09-17 00:56:16.305514 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.55s 2025-09-17 00:56:16.305524 | orchestrator | Create share directory -------------------------------------------------- 1.03s 2025-09-17 00:56:16.305535 | orchestrator | 2025-09-17 00:56:16.305546 | orchestrator | 2025-09-17 00:56:16.306054 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 00:56:16.306075 | orchestrator | 2025-09-17 00:56:16.306120 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 00:56:16.306132 | orchestrator | Wednesday 17 September 2025 00:53:39 +0000 (0:00:00.278) 0:00:00.278 *** 2025-09-17 00:56:16.306142 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:56:16.306151 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:56:16.306161 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:56:16.306170 | orchestrator | 2025-09-17 00:56:16.306180 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 00:56:16.306190 | orchestrator | Wednesday 17 September 2025 00:53:39 +0000 (0:00:00.308) 0:00:00.586 *** 2025-09-17 00:56:16.306199 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-17 00:56:16.306209 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-17 00:56:16.306219 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-17 00:56:16.306229 | orchestrator | 2025-09-17 00:56:16.306238 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-17 00:56:16.306248 | orchestrator | 2025-09-17 00:56:16.306257 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-17 00:56:16.306267 | orchestrator | Wednesday 17 September 2025 00:53:40 +0000 (0:00:00.415) 
0:00:01.002 *** 2025-09-17 00:56:16.306276 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:56:16.306286 | orchestrator | 2025-09-17 00:56:16.306296 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-17 00:56:16.306314 | orchestrator | Wednesday 17 September 2025 00:53:40 +0000 (0:00:00.538) 0:00:01.540 *** 2025-09-17 00:56:16.306329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.306358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.306399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.306413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.306429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.306440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.306458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 00:56:16.306469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 00:56:16.306479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 00:56:16.306489 | orchestrator |
2025-09-17 00:56:16.306499 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-09-17 00:56:16.306509 | orchestrator | Wednesday 17 September 2025 00:53:42 +0000 (0:00:01.808) 0:00:03.348 ***
2025-09-17 00:56:16.306519 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-09-17 00:56:16.306529 | orchestrator |
2025-09-17 00:56:16.306539 | orchestrator | TASK
[keystone : Set keystone policy file] *************************************
2025-09-17 00:56:16.306552 | orchestrator | Wednesday 17 September 2025 00:53:43 +0000 (0:00:00.809) 0:00:04.157 ***
2025-09-17 00:56:16.306562 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:56:16.306572 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:56:16.306582 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:56:16.306591 | orchestrator |
2025-09-17 00:56:16.306601 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-09-17 00:56:16.306611 | orchestrator | Wednesday 17 September 2025 00:53:43 +0000 (0:00:00.454) 0:00:04.612 ***
2025-09-17 00:56:16.306621 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-17 00:56:16.306632 | orchestrator |
2025-09-17 00:56:16.306643 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-17 00:56:16.306654 | orchestrator | Wednesday 17 September 2025 00:53:44 +0000 (0:00:00.665) 0:00:05.277 ***
2025-09-17 00:56:16.306666 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:56:16.306682 | orchestrator |
2025-09-17 00:56:16.306693 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-09-17 00:56:16.306704 | orchestrator | Wednesday 17 September 2025 00:53:45 +0000 (0:00:00.535) 0:00:05.813 ***
2025-09-17 00:56:16.306721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.306741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.306754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.306779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.306791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.306815 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.306827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 00:56:16.306839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 00:56:16.306850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 00:56:16.306861 | orchestrator |
2025-09-17 00:56:16.306872 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-09-17 00:56:16.306883 | orchestrator | Wednesday 17 September 2025 00:53:48 +0000 (0:00:03.321) 0:00:09.135 ***
2025-09-17 00:56:16.306901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 00:56:16.306914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group':
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:56:16.306963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 00:56:16.306975 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:56:16.306988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-17 00:56:16.306999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:56:16.307090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 00:56:16.307109 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:56:16.307129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-17 00:56:16.307151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:56:16.307162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 00:56:16.307172 | 
orchestrator | skipping: [testbed-node-2]
2025-09-17 00:56:16.307182 | orchestrator |
2025-09-17 00:56:16.307192 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-09-17 00:56:16.307201 | orchestrator | Wednesday 17 September 2025 00:53:49 +0000 (0:00:00.815) 0:00:09.951 ***
2025-09-17 00:56:16.307212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 00:56:16.307223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'],
'timeout': '30'}}})  2025-09-17 00:56:16.307238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 00:56:16.307254 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:56:16.307269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-17 00:56:16.307280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:56:16.307290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 00:56:16.307300 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:56:16.307310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 00:56:16.307327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-17 00:56:16.307344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 00:56:16.307354 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:56:16.307363 | orchestrator |
2025-09-17 00:56:16.307378 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-09-17 00:56:16.307388 | orchestrator | Wednesday 17 September 2025 00:53:50 +0000 (0:00:00.758) 0:00:10.710 ***
2025-09-17 00:56:16.307399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name':
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.307410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 
00:56:16.307426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.307443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.307458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.307469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.307479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 00:56:16.307489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 00:56:16.307577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 00:56:16.307596 | orchestrator |
2025-09-17 00:56:16.307606 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-09-17 00:56:16.307616 | orchestrator | Wednesday 17 September 2025 00:53:53 +0000 (0:00:03.205) 0:00:13.915 ***
2025-09-17 00:56:16.307633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.307650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:56:16.307661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.307672 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:56:16.307687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.307704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:56:16.307719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 00:56:16.307729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 00:56:16.307739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 00:56:16.307749 | orchestrator | 2025-09-17 00:56:16.307759 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-17 00:56:16.307768 | orchestrator | Wednesday 17 September 2025 00:53:58 +0000 (0:00:05.214) 0:00:19.130 *** 2025-09-17 00:56:16.307778 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:56:16.307787 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:56:16.307797 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:56:16.307806 | orchestrator | 2025-09-17 00:56:16.307816 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-17 00:56:16.307825 | orchestrator | Wednesday 17 September 2025 00:53:59 +0000 (0:00:01.403) 0:00:20.534 *** 2025-09-17 00:56:16.307834 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:56:16.307849 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:56:16.307859 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:56:16.307868 | orchestrator | 2025-09-17 00:56:16.307878 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-17 00:56:16.307887 | orchestrator | Wednesday 17 September 2025 00:54:00 +0000 (0:00:00.472) 0:00:21.006 *** 2025-09-17 00:56:16.307908 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:56:16.307972 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:56:16.307984 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:56:16.307993 | orchestrator | 2025-09-17 00:56:16.308002 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-17 00:56:16.308012 | orchestrator | Wednesday 17 September 2025 00:54:00 +0000 
(0:00:00.262) 0:00:21.269 *** 2025-09-17 00:56:16.308021 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:56:16.308031 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:56:16.308040 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:56:16.308050 | orchestrator | 2025-09-17 00:56:16.308060 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-17 00:56:16.308069 | orchestrator | Wednesday 17 September 2025 00:54:01 +0000 (0:00:00.413) 0:00:21.682 *** 2025-09-17 00:56:16.308088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.308109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:56:16.308120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.308131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:56:16.308148 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.308167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 00:56:16.308178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 00:56:16.308192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 00:56:16.308203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 00:56:16.308219 | orchestrator | 2025-09-17 00:56:16.308229 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-17 00:56:16.308239 | orchestrator | Wednesday 17 September 2025 00:54:03 +0000 (0:00:02.272) 0:00:23.955 *** 2025-09-17 00:56:16.308249 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:56:16.308258 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:56:16.308267 | 
orchestrator | skipping: [testbed-node-2] 2025-09-17 00:56:16.308277 | orchestrator | 2025-09-17 00:56:16.308286 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-17 00:56:16.308296 | orchestrator | Wednesday 17 September 2025 00:54:03 +0000 (0:00:00.257) 0:00:24.212 *** 2025-09-17 00:56:16.308305 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-17 00:56:16.308315 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-17 00:56:16.308324 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-17 00:56:16.308334 | orchestrator | 2025-09-17 00:56:16.308343 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-17 00:56:16.308353 | orchestrator | Wednesday 17 September 2025 00:54:05 +0000 (0:00:01.526) 0:00:25.739 *** 2025-09-17 00:56:16.308362 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 00:56:16.308371 | orchestrator | 2025-09-17 00:56:16.308381 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-17 00:56:16.308391 | orchestrator | Wednesday 17 September 2025 00:54:05 +0000 (0:00:00.761) 0:00:26.501 *** 2025-09-17 00:56:16.308400 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:56:16.308410 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:56:16.308419 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:56:16.308428 | orchestrator | 2025-09-17 00:56:16.308438 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-17 00:56:16.308447 | orchestrator | Wednesday 17 September 2025 00:54:06 +0000 (0:00:00.688) 0:00:27.189 *** 2025-09-17 00:56:16.308457 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 00:56:16.308466 | 
orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-17 00:56:16.308476 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-17 00:56:16.308485 | orchestrator | 2025-09-17 00:56:16.308495 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-17 00:56:16.308504 | orchestrator | Wednesday 17 September 2025 00:54:07 +0000 (0:00:00.995) 0:00:28.185 *** 2025-09-17 00:56:16.308514 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:56:16.308523 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:56:16.308533 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:56:16.308542 | orchestrator | 2025-09-17 00:56:16.308557 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-17 00:56:16.308567 | orchestrator | Wednesday 17 September 2025 00:54:07 +0000 (0:00:00.295) 0:00:28.480 *** 2025-09-17 00:56:16.308577 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-17 00:56:16.308586 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-17 00:56:16.308596 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-17 00:56:16.308605 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-17 00:56:16.308615 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-17 00:56:16.308624 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-17 00:56:16.308634 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-17 00:56:16.308644 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-17 
00:56:16.308659 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-17 00:56:16.308673 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-17 00:56:16.308683 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-17 00:56:16.308692 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-17 00:56:16.308702 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-17 00:56:16.308711 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-17 00:56:16.308721 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-17 00:56:16.308730 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-17 00:56:16.308740 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-17 00:56:16.308749 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-17 00:56:16.308759 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-17 00:56:16.308768 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-17 00:56:16.308778 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-17 00:56:16.308787 | orchestrator | 2025-09-17 00:56:16.308796 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-17 00:56:16.308806 | orchestrator | Wednesday 17 September 2025 00:54:16 +0000 (0:00:08.994) 0:00:37.474 
*** 2025-09-17 00:56:16.308815 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-17 00:56:16.308825 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-17 00:56:16.308834 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-17 00:56:16.308843 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-17 00:56:16.308853 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-17 00:56:16.308862 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-17 00:56:16.308872 | orchestrator | 2025-09-17 00:56:16.308881 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-17 00:56:16.308890 | orchestrator | Wednesday 17 September 2025 00:54:19 +0000 (0:00:02.794) 0:00:40.269 *** 2025-09-17 00:56:16.308905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.308933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.308955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 00:56:16.308967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.308977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.308987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 00:56:16.309004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 00:56:16.309021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 00:56:16.309035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 00:56:16.309055 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-17 00:56:16.309065 | orchestrator | Wednesday 17 September 2025 00:54:22 +0000 (0:00:02.357) 0:00:42.627 ***
2025-09-17 00:56:16.309074 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:56:16.309084 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:56:16.309093 | orchestrator | skipping: [testbed-node-2]

2025-09-17 00:56:16.309113 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-09-17 00:56:16.309122 | orchestrator | Wednesday 17 September 2025 00:54:22 +0000 (0:00:00.296) 0:00:42.923 ***
2025-09-17 00:56:16.309131 | orchestrator | changed: [testbed-node-0]

2025-09-17 00:56:16.309150 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-09-17 00:56:16.309160 | orchestrator | Wednesday 17 September 2025 00:54:24 +0000 (0:00:02.430) 0:00:45.353 ***
2025-09-17 00:56:16.309169 | orchestrator | changed: [testbed-node-0]

2025-09-17 00:56:16.309188 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-09-17 00:56:16.309197 | orchestrator | Wednesday 17 September 2025 00:54:27 +0000 (0:00:02.355) 0:00:47.709 ***
2025-09-17 00:56:16.309207 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:56:16.309216 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:56:16.309226 | orchestrator | ok: [testbed-node-1]

2025-09-17 00:56:16.309244 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-09-17 00:56:16.309254 | orchestrator | Wednesday 17 September 2025 00:54:27 +0000 (0:00:00.790) 0:00:48.500 ***
2025-09-17 00:56:16.309263 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:56:16.309273 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:56:16.309282 | orchestrator | ok: [testbed-node-2]

2025-09-17 00:56:16.309301 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-09-17 00:56:16.309310 | orchestrator | Wednesday 17 September 2025 00:54:28 +0000 (0:00:00.446) 0:00:48.946 ***
2025-09-17 00:56:16.309320 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:56:16.309329 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:56:16.309346 | orchestrator | skipping: [testbed-node-2]

2025-09-17 00:56:16.309365 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-09-17 00:56:16.309375 | orchestrator | Wednesday 17 September 2025 00:54:28 +0000 (0:00:00.338) 0:00:49.284 ***
2025-09-17 00:56:16.309384 | orchestrator | changed: [testbed-node-0]

2025-09-17 00:56:16.309403 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-09-17 00:56:16.309413 | orchestrator | Wednesday 17 September 2025 00:54:43 +0000 (0:00:14.653) 0:01:03.938 ***
2025-09-17 00:56:16.309422 | orchestrator | changed: [testbed-node-0]

2025-09-17 00:56:16.309441 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-17 00:56:16.309451 | orchestrator | Wednesday 17 September 2025 00:54:53 +0000 (0:00:10.376) 0:01:14.314 ***

2025-09-17 00:56:16.309470 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-17 00:56:16.309479 | orchestrator | Wednesday 17 September 2025 00:54:53 +0000 (0:00:00.072) 0:01:14.387 ***

2025-09-17 00:56:16.309498 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-17 00:56:16.309508 | orchestrator | Wednesday 17 September 2025 00:54:53 +0000 (0:00:00.066) 0:01:14.453 ***

2025-09-17 00:56:16.309527 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-09-17 00:56:16.309541 | orchestrator | Wednesday 17 September 2025 00:54:53 +0000 (0:00:00.065) 0:01:14.518 ***
2025-09-17 00:56:16.309551 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:56:16.309561 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:56:16.309570 | orchestrator | changed: [testbed-node-2]

2025-09-17 00:56:16.309589 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-09-17 00:56:16.309599 | orchestrator | Wednesday 17 September 2025 00:55:11 +0000 (0:00:17.645) 0:01:32.164 ***
2025-09-17 00:56:16.309608 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:56:16.309618 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:56:16.309627 | orchestrator | changed: [testbed-node-0]

2025-09-17 00:56:16.309646 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-09-17 00:56:16.309656 | orchestrator | Wednesday 17 September 2025 00:55:19 +0000 (0:00:07.568) 0:01:39.733 ***
2025-09-17 00:56:16.309665 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:56:16.309674 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:56:16.309684 | orchestrator | changed: [testbed-node-1]

2025-09-17 00:56:16.309703 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-17
00:56:16.309712 | orchestrator | Wednesday 17 September 2025 00:55:25 +0000 (0:00:06.728) 0:01:46.462 ***
2025-09-17 00:56:16.309722 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2

2025-09-17 00:56:16.309741 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-09-17 00:56:16.309754 | orchestrator | Wednesday 17 September 2025 00:55:26 +0000 (0:00:00.716) 0:01:47.178 ***
2025-09-17 00:56:16.309764 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:56:16.309774 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:56:16.309783 | orchestrator | ok: [testbed-node-2]

2025-09-17 00:56:16.309803 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-09-17 00:56:16.309812 | orchestrator | Wednesday 17 September 2025 00:55:27 +0000 (0:00:00.754) 0:01:47.933 ***
2025-09-17 00:56:16.309822 | orchestrator | changed: [testbed-node-0]

2025-09-17 00:56:16.309841 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-09-17 00:56:16.309850 | orchestrator | Wednesday 17 September 2025 00:55:29 +0000 (0:00:01.808) 0:01:49.741 ***
2025-09-17 00:56:16.309863 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)

2025-09-17 00:56:16.309883 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-09-17 00:56:16.309892 | orchestrator | Wednesday 17 September 2025 00:55:40 +0000 (0:00:11.411) 0:02:01.152 ***
2025-09-17 00:56:16.309902 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))

2025-09-17 00:56:16.309936 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-09-17 00:56:16.309945 | orchestrator | Wednesday 17 September 2025 00:56:03 +0000 (0:00:22.995) 0:02:24.148 ***
2025-09-17 00:56:16.309955 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-09-17 00:56:16.309964 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)

2025-09-17 00:56:16.309983 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-09-17 00:56:16.309993 | orchestrator | Wednesday 17 September 2025 00:56:10 +0000 (0:00:06.702) 0:02:30.851 ***
2025-09-17 00:56:16.310002 | orchestrator | skipping: [testbed-node-0]

2025-09-17 00:56:16.310077 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-09-17 00:56:16.310087 | orchestrator | Wednesday 17 September 2025 00:56:10 +0000 (0:00:00.136) 0:02:30.988 ***
2025-09-17 00:56:16.310096 | orchestrator | skipping: [testbed-node-0]

2025-09-17 00:56:16.310115 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-09-17 00:56:16.310125 | orchestrator | Wednesday 17 September 2025 00:56:10 +0000 (0:00:00.137) 0:02:31.125 ***
2025-09-17 00:56:16.310134 | orchestrator | skipping: [testbed-node-0]

2025-09-17 00:56:16.310153 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-09-17 00:56:16.310162 | orchestrator | Wednesday 17 September 2025 00:56:10 +0000 (0:00:00.147) 0:02:31.273 ***
2025-09-17 00:56:16.310172 | orchestrator | skipping: [testbed-node-0]

2025-09-17 00:56:16.310191 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-09-17 00:56:16.310200 | orchestrator | Wednesday 17 September 2025 00:56:11 +0000 (0:00:00.614) 0:02:31.888 ***
2025-09-17 00:56:16.310209 | orchestrator | ok: [testbed-node-0]

2025-09-17 00:56:16.310228 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-17 00:56:16.310238 | orchestrator | Wednesday 17 September 2025 00:56:14 +0000 (0:00:03.509) 0:02:35.398 ***
2025-09-17 00:56:16.310247 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:56:16.310257 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:56:16.310266 | orchestrator | skipping: [testbed-node-2]

2025-09-17 00:56:16.310285 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:56:16.310296 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-09-17 00:56:16.310307 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-09-17 00:56:16.310323 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0

2025-09-17 00:56:16.310352 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:56:16.310362 | orchestrator | Wednesday 17 September 2025 00:56:15 +0000 (0:00:00.416) 0:02:35.815 ***
2025-09-17 00:56:16.310371 | orchestrator | ===============================================================================
2025-09-17 00:56:16.310387 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.00s
2025-09-17 00:56:16.310397 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 17.65s
2025-09-17 00:56:16.310406 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.65s
2025-09-17 00:56:16.310416 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.41s
2025-09-17 00:56:16.310425 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.38s
2025-09-17 00:56:16.310435 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.99s
2025-09-17 00:56:16.310444 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.57s
2025-09-17 00:56:16.310454 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.73s
2025-09-17 00:56:16.310463 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.70s
2025-09-17 00:56:16.310472 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.21s
2025-09-17 00:56:16.310487 | orchestrator | keystone : Creating default user role ----------------------------------- 3.51s
2025-09-17 00:56:16.310496 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.32s
2025-09-17 00:56:16.310506 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.21s
2025-09-17 00:56:16.310516 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.79s
2025-09-17 00:56:16.310525 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.43s
2025-09-17 00:56:16.310535 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.36s
2025-09-17 00:56:16.310544 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.36s
2025-09-17 00:56:16.310554 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.27s
2025-09-17 00:56:16.310563 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.81s
2025-09-17 00:56:16.310573 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.81s
2025-09-17 00:56:16.310582 | orchestrator | 2025-09-17 00:56:16 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED
2025-09-17 00:56:16.310592 | orchestrator | 2025-09-17 00:56:16 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:56:19.366603 | orchestrator | 2025-09-17 00:56:19 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED
2025-09-17 00:56:19.366763 | orchestrator | 2025-09-17 00:56:19 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:56:19.366788 | orchestrator | 2025-09-17 00:56:19 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:56:19.367558 | orchestrator | 2025-09-17 00:56:19 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:56:19.368295 | orchestrator | 2025-09-17 00:56:19 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED
2025-09-17 00:56:19.368310 | orchestrator | 2025-09-17 00:56:19 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:56:22.392638 | orchestrator | 2025-09-17 00:56:22 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED
2025-09-17 00:56:22.393632 | orchestrator | 2025-09-17 00:56:22 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:56:22.394009 | orchestrator | 2025-09-17 00:56:22 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:56:22.394662 | orchestrator | 2025-09-17 00:56:22 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:56:22.395629 | orchestrator | 2025-09-17 00:56:22 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state STARTED
2025-09-17 00:56:22.395686 | orchestrator
| 2025-09-17 00:56:22 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:56:25.435534 | orchestrator | 2025-09-17 00:56:25 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED
2025-09-17 00:56:25.437123 | orchestrator | 2025-09-17 00:56:25 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:56:25.439159 | orchestrator | 2025-09-17 00:56:25 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:56:25.439665 | orchestrator | 2025-09-17 00:56:25 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:56:25.441836 | orchestrator | 2025-09-17 00:56:25 | INFO  | Task 3ad8ae32-6277-4e79-8ef1-6e991cd85904 is in state SUCCESS
2025-09-17 00:56:25.442544 | orchestrator | 2025-09-17 00:56:25 | INFO  | Task 09329937-54d7-44f8-8c99-918e8fc33657 is in state STARTED
2025-09-17 00:56:25.442574 | orchestrator | 2025-09-17 00:56:25 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:56:28.480474 | orchestrator | 2025-09-17 00:56:28 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED
2025-09-17 00:56:28.482241 | orchestrator | 2025-09-17 00:56:28 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:56:28.484454 | orchestrator | 2025-09-17 00:56:28 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:56:28.486321 | orchestrator | 2025-09-17 00:56:28 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:56:28.489348 | orchestrator | 2025-09-17 00:56:28 | INFO  | Task 09329937-54d7-44f8-8c99-918e8fc33657 is in state STARTED
2025-09-17 00:56:28.489675 | orchestrator | 2025-09-17 00:56:28 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:56:31.541350 | orchestrator | 2025-09-17 00:56:31 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED
2025-09-17 00:56:31.542850 | orchestrator | 2025-09-17 00:56:31 | INFO  |
8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:57:47.540378 | orchestrator | 2025-09-17 00:57:47 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:57:47.541068 | orchestrator | 2025-09-17 00:57:47 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:57:47.542824 | orchestrator | 2025-09-17 00:57:47 | INFO  | Task 09329937-54d7-44f8-8c99-918e8fc33657 is in state SUCCESS 2025-09-17 00:57:47.544078 | orchestrator | 2025-09-17 00:57:47.544110 | orchestrator | 2025-09-17 00:57:47.544116 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-17 00:57:47.544121 | orchestrator | 2025-09-17 00:57:47.544127 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-17 00:57:47.544132 | orchestrator | Wednesday 17 September 2025 00:55:29 +0000 (0:00:00.246) 0:00:00.246 *** 2025-09-17 00:57:47.544138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-17 00:57:47.544145 | orchestrator | 2025-09-17 00:57:47.544151 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-17 00:57:47.544156 | orchestrator | Wednesday 17 September 2025 00:55:30 +0000 (0:00:00.248) 0:00:00.495 *** 2025-09-17 00:57:47.544161 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-17 00:57:47.544167 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-17 00:57:47.544172 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-17 00:57:47.544177 | orchestrator | 2025-09-17 00:57:47.544182 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-17 00:57:47.544187 | orchestrator | Wednesday 17 September 2025 00:55:31 +0000 
(0:00:01.259) 0:00:01.754 *** 2025-09-17 00:57:47.544192 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-17 00:57:47.544197 | orchestrator | 2025-09-17 00:57:47.544202 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-17 00:57:47.544207 | orchestrator | Wednesday 17 September 2025 00:55:32 +0000 (0:00:01.181) 0:00:02.935 *** 2025-09-17 00:57:47.544212 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544217 | orchestrator | 2025-09-17 00:57:47.544222 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-17 00:57:47.544227 | orchestrator | Wednesday 17 September 2025 00:55:33 +0000 (0:00:01.006) 0:00:03.942 *** 2025-09-17 00:57:47.544232 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544237 | orchestrator | 2025-09-17 00:57:47.544242 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-17 00:57:47.544247 | orchestrator | Wednesday 17 September 2025 00:55:34 +0000 (0:00:00.951) 0:00:04.894 *** 2025-09-17 00:57:47.544252 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
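The `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` messages that dominate this log come from a client-side loop that polls each submitted task until it reaches a terminal state. A minimal sketch of that pattern (the `fetch_state` callable and the injected `sleep` are illustrative assumptions, not the actual osism API):

```python
import time


def wait_for_tasks(task_ids, fetch_state, interval=1.0, sleep=time.sleep):
    """Poll until every task reaches a terminal state.

    task_ids: iterable of task identifiers (e.g. UUID strings)
    fetch_state: callable(task_id) -> state string ("STARTED", "SUCCESS", ...)
    """
    pending = set(task_ids)
    while pending:
        # Iterate over a snapshot so we can discard from `pending` safely.
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
```

Note that the loop only removes a task from the pending set once it reports SUCCESS or FAILURE, which is why the log repeats the full task list on every check until the slowest task finishes.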
2025-09-17 00:57:47.544256 | orchestrator | ok: [testbed-manager] 2025-09-17 00:57:47.544262 | orchestrator | 2025-09-17 00:57:47.544267 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-17 00:57:47.544271 | orchestrator | Wednesday 17 September 2025 00:56:12 +0000 (0:00:37.627) 0:00:42.521 *** 2025-09-17 00:57:47.544276 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-17 00:57:47.544281 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-17 00:57:47.544286 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-17 00:57:47.544291 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-17 00:57:47.544296 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-17 00:57:47.544301 | orchestrator | 2025-09-17 00:57:47.544306 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-17 00:57:47.544311 | orchestrator | Wednesday 17 September 2025 00:56:16 +0000 (0:00:04.105) 0:00:46.627 *** 2025-09-17 00:57:47.544316 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-17 00:57:47.544321 | orchestrator | 2025-09-17 00:57:47.544326 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-17 00:57:47.544331 | orchestrator | Wednesday 17 September 2025 00:56:16 +0000 (0:00:00.432) 0:00:47.060 *** 2025-09-17 00:57:47.544336 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:57:47.544341 | orchestrator | 2025-09-17 00:57:47.544346 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-17 00:57:47.544360 | orchestrator | Wednesday 17 September 2025 00:56:16 +0000 (0:00:00.131) 0:00:47.191 *** 2025-09-17 00:57:47.544369 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:57:47.544374 | orchestrator | 2025-09-17 00:57:47.544379 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2025-09-17 00:57:47.544384 | orchestrator | Wednesday 17 September 2025 00:56:17 +0000 (0:00:00.322) 0:00:47.514 *** 2025-09-17 00:57:47.544389 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544394 | orchestrator | 2025-09-17 00:57:47.544399 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-17 00:57:47.544404 | orchestrator | Wednesday 17 September 2025 00:56:19 +0000 (0:00:02.257) 0:00:49.771 *** 2025-09-17 00:57:47.544409 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544414 | orchestrator | 2025-09-17 00:57:47.544418 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ****** 2025-09-17 00:57:47.544423 | orchestrator | Wednesday 17 September 2025 00:56:20 +0000 (0:00:00.753) 0:00:50.525 *** 2025-09-17 00:57:47.544428 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544433 | orchestrator | 2025-09-17 00:57:47.544438 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-17 00:57:47.544443 | orchestrator | Wednesday 17 September 2025 00:56:20 +0000 (0:00:00.774) 0:00:51.299 *** 2025-09-17 00:57:47.544448 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-17 00:57:47.544453 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-17 00:57:47.544458 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-17 00:57:47.544463 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-17 00:57:47.544468 | orchestrator | 2025-09-17 00:57:47.544473 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:57:47.544478 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 00:57:47.544483 | orchestrator | 2025-09-17 00:57:47.544488 | orchestrator | 2025-09-17
00:57:47.544499 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:57:47.544504 | orchestrator | Wednesday 17 September 2025 00:56:22 +0000 (0:00:01.383) 0:00:52.683 *** 2025-09-17 00:57:47.544509 | orchestrator | =============================================================================== 2025-09-17 00:57:47.544514 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.63s 2025-09-17 00:57:47.544519 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.11s 2025-09-17 00:57:47.544524 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.26s 2025-09-17 00:57:47.544529 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.38s 2025-09-17 00:57:47.544534 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.26s 2025-09-17 00:57:47.544539 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.18s 2025-09-17 00:57:47.544543 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.01s 2025-09-17 00:57:47.544548 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s 2025-09-17 00:57:47.544553 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.77s 2025-09-17 00:57:47.544558 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.75s 2025-09-17 00:57:47.544563 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s 2025-09-17 00:57:47.544568 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s 2025-09-17 00:57:47.544573 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2025-09-17 00:57:47.544578 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-09-17 00:57:47.544583 | orchestrator | 2025-09-17 00:57:47.544588 | orchestrator | 2025-09-17 00:57:47.544593 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2025-09-17 00:57:47.544598 | orchestrator | 2025-09-17 00:57:47.544606 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-17 00:57:47.544611 | orchestrator | Wednesday 17 September 2025 00:56:25 +0000 (0:00:00.205) 0:00:00.205 *** 2025-09-17 00:57:47.544616 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544621 | orchestrator | 2025-09-17 00:57:47.544626 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-17 00:57:47.544631 | orchestrator | Wednesday 17 September 2025 00:56:26 +0000 (0:00:01.203) 0:00:01.408 *** 2025-09-17 00:57:47.544635 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544640 | orchestrator | 2025-09-17 00:57:47.544645 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-17 00:57:47.544650 | orchestrator | Wednesday 17 September 2025 00:56:27 +0000 (0:00:00.981) 0:00:02.389 *** 2025-09-17 00:57:47.544655 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544660 | orchestrator | 2025-09-17 00:57:47.544665 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-17 00:57:47.544670 | orchestrator | Wednesday 17 September 2025 00:56:28 +0000 (0:00:00.931) 0:00:03.321 *** 2025-09-17 00:57:47.544674 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544679 | orchestrator | 2025-09-17 00:57:47.544684 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-17 00:57:47.544689 | orchestrator | Wednesday 17 September 2025 00:56:29 +0000
(0:00:01.126) 0:00:04.447 *** 2025-09-17 00:57:47.544694 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544699 | orchestrator | 2025-09-17 00:57:47.544704 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-17 00:57:47.544709 | orchestrator | Wednesday 17 September 2025 00:56:30 +0000 (0:00:01.087) 0:00:05.534 *** 2025-09-17 00:57:47.544715 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544720 | orchestrator | 2025-09-17 00:57:47.544726 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-17 00:57:47.544735 | orchestrator | Wednesday 17 September 2025 00:56:31 +0000 (0:00:01.065) 0:00:06.600 *** 2025-09-17 00:57:47.544741 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544746 | orchestrator | 2025-09-17 00:57:47.544752 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-17 00:57:47.544758 | orchestrator | Wednesday 17 September 2025 00:56:33 +0000 (0:00:02.061) 0:00:08.662 *** 2025-09-17 00:57:47.544764 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544769 | orchestrator | 2025-09-17 00:57:47.544775 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-17 00:57:47.544780 | orchestrator | Wednesday 17 September 2025 00:56:35 +0000 (0:00:01.175) 0:00:09.837 *** 2025-09-17 00:57:47.544786 | orchestrator | changed: [testbed-manager] 2025-09-17 00:57:47.544792 | orchestrator | 2025-09-17 00:57:47.544797 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-17 00:57:47.544803 | orchestrator | Wednesday 17 September 2025 00:57:22 +0000 (0:00:47.071) 0:00:56.908 *** 2025-09-17 00:57:47.544809 | orchestrator | skipping: [testbed-manager] 2025-09-17 00:57:47.544814 | orchestrator | 2025-09-17 00:57:47.544820 | orchestrator | PLAY [Restart ceph 
manager services] ******************************************* 2025-09-17 00:57:47.544825 | orchestrator | 2025-09-17 00:57:47.544831 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-17 00:57:47.544837 | orchestrator | Wednesday 17 September 2025 00:57:22 +0000 (0:00:00.135) 0:00:57.044 *** 2025-09-17 00:57:47.544842 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:57:47.544848 | orchestrator | 2025-09-17 00:57:47.544854 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-17 00:57:47.544859 | orchestrator | 2025-09-17 00:57:47.544865 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-17 00:57:47.544871 | orchestrator | Wednesday 17 September 2025 00:57:34 +0000 (0:00:11.746) 0:01:08.790 *** 2025-09-17 00:57:47.544876 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:57:47.544882 | orchestrator | 2025-09-17 00:57:47.544894 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-17 00:57:47.544899 | orchestrator | 2025-09-17 00:57:47.544908 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-17 00:57:47.544914 | orchestrator | Wednesday 17 September 2025 00:57:45 +0000 (0:00:11.376) 0:01:20.166 *** 2025-09-17 00:57:47.544919 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:57:47.544948 | orchestrator | 2025-09-17 00:57:47.544954 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:57:47.544960 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 00:57:47.544966 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:57:47.544972 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:57:47.544978 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 00:57:47.544984 | orchestrator | 2025-09-17 00:57:47.544990 | orchestrator | 2025-09-17 00:57:47.544995 | orchestrator | 2025-09-17 00:57:47.545001 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 00:57:47.545007 | orchestrator | Wednesday 17 September 2025 00:57:46 +0000 (0:00:01.385) 0:01:21.552 *** 2025-09-17 00:57:47.545012 | orchestrator | =============================================================================== 2025-09-17 00:57:47.545018 | orchestrator | Create admin user ------------------------------------------------------ 47.07s 2025-09-17 00:57:47.545024 | orchestrator | Restart ceph manager service ------------------------------------------- 24.51s 2025-09-17 00:57:47.545030 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.06s 2025-09-17 00:57:47.545035 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.20s 2025-09-17 00:57:47.545041 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.18s 2025-09-17 00:57:47.545047 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.13s 2025-09-17 00:57:47.545053 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.09s 2025-09-17 00:57:47.545059 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.07s 2025-09-17 00:57:47.545065 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.98s 2025-09-17 00:57:47.545071 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.93s 2025-09-17 00:57:47.545076 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.14s 2025-09-17 00:57:47.545081 | orchestrator | 2025-09-17 00:57:47 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:57:50.580517 | orchestrator | 2025-09-17 00:57:50 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED 2025-09-17 00:57:50.581019 | orchestrator | 2025-09-17 00:57:50 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:57:50.581903 | orchestrator | 2025-09-17 00:57:50 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:57:50.582793 | orchestrator | 2025-09-17 00:57:50 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:57:50.582824 | orchestrator | 2025-09-17 00:57:50 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:57:53.611716 | orchestrator | 2025-09-17 00:57:53 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED 2025-09-17 00:57:53.616238 | orchestrator | 2025-09-17 00:57:53 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:57:53.618321 | orchestrator | 2025-09-17 00:57:53 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:57:53.619618 | orchestrator | 2025-09-17 00:57:53 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:57:53.619819 | orchestrator | 2025-09-17 00:57:53 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:57:56.658170 | orchestrator | 2025-09-17 00:57:56 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED 2025-09-17 00:57:56.658276 | orchestrator | 2025-09-17 00:57:56 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:57:56.659295 | orchestrator | 2025-09-17 00:57:56 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:57:56.659994 | orchestrator | 2025-09-17 00:57:56 | INFO  | Task 
4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:57:56.660018 | orchestrator | 2025-09-17 00:57:56 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:57:59.694753 | orchestrator | 2025-09-17 00:57:59 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED 2025-09-17 00:57:59.694856 | orchestrator | 2025-09-17 00:57:59 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:57:59.694869 | orchestrator | 2025-09-17 00:57:59 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:57:59.695331 | orchestrator | 2025-09-17 00:57:59 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:57:59.695358 | orchestrator | 2025-09-17 00:57:59 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:58:02.722199 | orchestrator | 2025-09-17 00:58:02 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED 2025-09-17 00:58:02.722448 | orchestrator | 2025-09-17 00:58:02 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:58:02.722468 | orchestrator | 2025-09-17 00:58:02 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:58:02.722562 | orchestrator | 2025-09-17 00:58:02 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:58:02.722578 | orchestrator | 2025-09-17 00:58:02 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:58:05.759501 | orchestrator | 2025-09-17 00:58:05 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED 2025-09-17 00:58:05.759679 | orchestrator | 2025-09-17 00:58:05 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:58:05.760224 | orchestrator | 2025-09-17 00:58:05 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:58:05.760849 | orchestrator | 2025-09-17 00:58:05 | INFO  | Task 
4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:58:05.760875 | orchestrator | 2025-09-17 00:58:05 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:58:08.787191 | orchestrator | 2025-09-17 00:58:08 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED 2025-09-17 00:58:08.787288 | orchestrator | 2025-09-17 00:58:08 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:58:08.787565 | orchestrator | 2025-09-17 00:58:08 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:58:08.788199 | orchestrator | 2025-09-17 00:58:08 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:58:08.788237 | orchestrator | 2025-09-17 00:58:08 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:58:11.825401 | orchestrator | 2025-09-17 00:58:11 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state STARTED 2025-09-17 00:58:11.825659 | orchestrator | 2025-09-17 00:58:11 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:58:11.826379 | orchestrator | 2025-09-17 00:58:11 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:58:11.827116 | orchestrator | 2025-09-17 00:58:11 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:58:11.827146 | orchestrator | 2025-09-17 00:58:11 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:58:14.861128 | orchestrator | 2025-09-17 00:58:14 | INFO  | Task e83d7363-539f-427c-bd7f-c72dc165cdd5 is in state SUCCESS 2025-09-17 00:58:14.862066 | orchestrator | 2025-09-17 00:58:14.862118 | orchestrator | 2025-09-17 00:58:14.862317 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 00:58:14.862341 | orchestrator | 2025-09-17 00:58:14.862360 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2025-09-17 00:58:14.862378 | orchestrator | Wednesday 17 September 2025 00:56:20 +0000 (0:00:00.231) 0:00:00.231 *** 2025-09-17 00:58:14.862389 | orchestrator | ok: [testbed-node-0] 2025-09-17 00:58:14.862401 | orchestrator | ok: [testbed-node-1] 2025-09-17 00:58:14.862412 | orchestrator | ok: [testbed-node-2] 2025-09-17 00:58:14.862423 | orchestrator | 2025-09-17 00:58:14.862434 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 00:58:14.862444 | orchestrator | Wednesday 17 September 2025 00:56:20 +0000 (0:00:00.238) 0:00:00.470 *** 2025-09-17 00:58:14.862455 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-17 00:58:14.862467 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-17 00:58:14.862477 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-17 00:58:14.862488 | orchestrator | 2025-09-17 00:58:14.862499 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-17 00:58:14.862509 | orchestrator | 2025-09-17 00:58:14.862520 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-17 00:58:14.862531 | orchestrator | Wednesday 17 September 2025 00:56:21 +0000 (0:00:00.413) 0:00:00.883 *** 2025-09-17 00:58:14.862542 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:58:14.862553 | orchestrator | 2025-09-17 00:58:14.862564 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-17 00:58:14.862574 | orchestrator | Wednesday 17 September 2025 00:56:21 +0000 (0:00:00.428) 0:00:01.312 *** 2025-09-17 00:58:14.862585 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-17 00:58:14.862596 | orchestrator | 2025-09-17 00:58:14.862607 | orchestrator | TASK [service-ks-register : 
barbican | Creating endpoints] ********************* 2025-09-17 00:58:14.862619 | orchestrator | Wednesday 17 September 2025 00:56:25 +0000 (0:00:04.173) 0:00:05.485 *** 2025-09-17 00:58:14.862632 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-17 00:58:14.862645 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-17 00:58:14.862657 | orchestrator | 2025-09-17 00:58:14.862669 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-17 00:58:14.862682 | orchestrator | Wednesday 17 September 2025 00:56:32 +0000 (0:00:06.808) 0:00:12.293 *** 2025-09-17 00:58:14.862694 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-17 00:58:14.862707 | orchestrator | 2025-09-17 00:58:14.862719 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-17 00:58:14.862732 | orchestrator | Wednesday 17 September 2025 00:56:36 +0000 (0:00:03.377) 0:00:15.670 *** 2025-09-17 00:58:14.862744 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-17 00:58:14.862756 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-17 00:58:14.862768 | orchestrator | 2025-09-17 00:58:14.862806 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-17 00:58:14.862820 | orchestrator | Wednesday 17 September 2025 00:56:40 +0000 (0:00:04.070) 0:00:19.741 *** 2025-09-17 00:58:14.862833 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-17 00:58:14.862844 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-17 00:58:14.862854 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-17 00:58:14.862865 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-17 00:58:14.862876 | 
orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-17 00:58:14.862887 | orchestrator | 2025-09-17 00:58:14.862897 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-17 00:58:14.862908 | orchestrator | Wednesday 17 September 2025 00:56:54 +0000 (0:00:13.923) 0:00:33.664 *** 2025-09-17 00:58:14.862919 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-17 00:58:14.862953 | orchestrator | 2025-09-17 00:58:14.862964 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-17 00:58:14.862975 | orchestrator | Wednesday 17 September 2025 00:56:58 +0000 (0:00:04.291) 0:00:37.955 *** 2025-09-17 00:58:14.862989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.863027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.863041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.863062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.863076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.863088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.863112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.863126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.863137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.863148 | orchestrator | 2025-09-17 00:58:14.863159 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-17 00:58:14.863170 | orchestrator | Wednesday 17 September 2025 00:57:00 +0000 (0:00:02.179) 0:00:40.135 *** 2025-09-17 00:58:14.863187 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-17 
00:58:14.863198 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-17 00:58:14.863209 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-17 00:58:14.863219 | orchestrator | 2025-09-17 00:58:14.863230 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-17 00:58:14.863241 | orchestrator | Wednesday 17 September 2025 00:57:02 +0000 (0:00:01.423) 0:00:41.559 *** 2025-09-17 00:58:14.863251 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:58:14.863263 | orchestrator | 2025-09-17 00:58:14.863273 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-17 00:58:14.863284 | orchestrator | Wednesday 17 September 2025 00:57:02 +0000 (0:00:00.244) 0:00:41.805 *** 2025-09-17 00:58:14.863294 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:58:14.863305 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:58:14.863316 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:58:14.863326 | orchestrator | 2025-09-17 00:58:14.863337 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-17 00:58:14.863347 | orchestrator | Wednesday 17 September 2025 00:57:03 +0000 (0:00:00.862) 0:00:42.667 *** 2025-09-17 00:58:14.863358 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 00:58:14.863369 | orchestrator | 2025-09-17 00:58:14.863380 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-17 00:58:14.863391 | orchestrator | Wednesday 17 September 2025 00:57:03 +0000 (0:00:00.584) 0:00:43.251 *** 2025-09-17 00:58:14.863402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.863426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.863439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.863459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.863471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.863482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.863493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.863516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 
00:58:14.863529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.863546 | orchestrator | 2025-09-17 00:58:14.863557 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-17 00:58:14.863568 | orchestrator | Wednesday 17 September 2025 00:57:07 +0000 (0:00:03.501) 0:00:46.753 *** 2025-09-17 00:58:14.863579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 00:58:14.863590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863613 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:58:14.863634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 00:58:14.863646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863683 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:58:14.863695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 00:58:14.863706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863728 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:58:14.863739 | orchestrator | 
2025-09-17 00:58:14.863750 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-17 00:58:14.863761 | orchestrator | Wednesday 17 September 2025 00:57:10 +0000 (0:00:03.205) 0:00:49.959 *** 2025-09-17 00:58:14.863784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 00:58:14.863803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863826 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:58:14.863837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 00:58:14.863848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863877 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:58:14.863900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 00:58:14.863913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.863953 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:58:14.863964 | orchestrator | 2025-09-17 00:58:14.863975 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-17 00:58:14.863986 | orchestrator | Wednesday 17 September 2025 00:57:11 +0000 (0:00:01.061) 0:00:51.020 *** 2025-09-17 00:58:14.863997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.864247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.864377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.864403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864522 | orchestrator | 2025-09-17 00:58:14.864533 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-17 00:58:14.864544 | orchestrator | Wednesday 17 September 2025 00:57:14 +0000 (0:00:03.197) 0:00:54.218 *** 2025-09-17 00:58:14.864555 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:58:14.864565 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:58:14.864576 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:58:14.864587 | orchestrator | 2025-09-17 00:58:14.864597 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-17 00:58:14.864608 | orchestrator | Wednesday 17 September 2025 00:57:17 +0000 (0:00:02.622) 0:00:56.840 *** 2025-09-17 00:58:14.864618 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 00:58:14.864629 | orchestrator | 2025-09-17 00:58:14.864640 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-17 00:58:14.864650 | orchestrator | Wednesday 17 September 2025 00:57:18 +0000 (0:00:01.013) 0:00:57.853 *** 2025-09-17 00:58:14.864661 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:58:14.864672 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:58:14.864682 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:58:14.864693 | orchestrator | 2025-09-17 00:58:14.864703 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-17 00:58:14.864714 | orchestrator | Wednesday 17 
September 2025 00:57:19 +0000 (0:00:00.669) 0:00:58.523 *** 2025-09-17 00:58:14.864725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.864738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 
00:58:14.864768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.864781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.864854 | orchestrator | 2025-09-17 00:58:14.864865 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-17 00:58:14.865066 | orchestrator | Wednesday 17 September 2025 00:57:28 +0000 (0:00:09.763) 0:01:08.286 *** 2025-09-17 00:58:14.865103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 00:58:14.865117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.865128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.865140 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:58:14.865151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 00:58:14.865172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 00:58:14.865197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.865209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.865221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.865232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:58:14.865254 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:58:14.865265 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:58:14.865276 | orchestrator | 2025-09-17 00:58:14.865287 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-17 00:58:14.865298 | orchestrator | Wednesday 17 September 2025 00:57:29 +0000 (0:00:00.539) 0:01:08.825 *** 2025-09-17 00:58:14.865309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.865332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.865344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 00:58:14.865356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.865373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.865385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.865396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.865420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.865432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:58:14.865443 | orchestrator | 2025-09-17 00:58:14.865454 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-17 00:58:14.865465 | orchestrator | Wednesday 17 September 2025 00:57:33 +0000 (0:00:04.160) 0:01:12.986 *** 2025-09-17 00:58:14.865476 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:58:14.865486 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:58:14.865497 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:58:14.865508 | orchestrator | 2025-09-17 00:58:14.865519 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-17 00:58:14.865529 | 
orchestrator | Wednesday 17 September 2025 00:57:34 +0000 (0:00:00.643) 0:01:13.630 *** 2025-09-17 00:58:14.865540 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:58:14.865550 | orchestrator | 2025-09-17 00:58:14.865561 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-17 00:58:14.865579 | orchestrator | Wednesday 17 September 2025 00:57:36 +0000 (0:00:02.559) 0:01:16.189 *** 2025-09-17 00:58:14.865590 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:58:14.865601 | orchestrator | 2025-09-17 00:58:14.865612 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-17 00:58:14.865622 | orchestrator | Wednesday 17 September 2025 00:57:39 +0000 (0:00:02.437) 0:01:18.627 *** 2025-09-17 00:58:14.865633 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:58:14.865643 | orchestrator | 2025-09-17 00:58:14.865654 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-17 00:58:14.865665 | orchestrator | Wednesday 17 September 2025 00:57:51 +0000 (0:00:12.319) 0:01:30.947 *** 2025-09-17 00:58:14.865676 | orchestrator | 2025-09-17 00:58:14.865686 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-17 00:58:14.865699 | orchestrator | Wednesday 17 September 2025 00:57:51 +0000 (0:00:00.061) 0:01:31.009 *** 2025-09-17 00:58:14.865711 | orchestrator | 2025-09-17 00:58:14.865723 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-17 00:58:14.865736 | orchestrator | Wednesday 17 September 2025 00:57:51 +0000 (0:00:00.066) 0:01:31.076 *** 2025-09-17 00:58:14.865748 | orchestrator | 2025-09-17 00:58:14.865760 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-17 00:58:14.865772 | orchestrator | Wednesday 17 September 2025 00:57:51 +0000 (0:00:00.071) 
0:01:31.148 *** 2025-09-17 00:58:14.865784 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:58:14.865796 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:58:14.865809 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:58:14.865821 | orchestrator | 2025-09-17 00:58:14.865833 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-17 00:58:14.865845 | orchestrator | Wednesday 17 September 2025 00:57:58 +0000 (0:00:07.143) 0:01:38.291 *** 2025-09-17 00:58:14.865857 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:58:14.865869 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:58:14.865881 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:58:14.865893 | orchestrator | 2025-09-17 00:58:14.865907 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-17 00:58:14.865919 | orchestrator | Wednesday 17 September 2025 00:58:03 +0000 (0:00:05.108) 0:01:43.399 *** 2025-09-17 00:58:14.865966 | orchestrator | changed: [testbed-node-0] 2025-09-17 00:58:14.865978 | orchestrator | changed: [testbed-node-1] 2025-09-17 00:58:14.865990 | orchestrator | changed: [testbed-node-2] 2025-09-17 00:58:14.866002 | orchestrator | 2025-09-17 00:58:14.866044 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 00:58:14.866059 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-17 00:58:14.866071 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-17 00:58:14.866082 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-17 00:58:14.866093 | orchestrator | 2025-09-17 00:58:14.866104 | orchestrator | 2025-09-17 00:58:14.866115 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-17 00:58:14.866126 | orchestrator | Wednesday 17 September 2025 00:58:13 +0000 (0:00:10.034) 0:01:53.434 *** 2025-09-17 00:58:14.866136 | orchestrator | =============================================================================== 2025-09-17 00:58:14.866147 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 13.92s 2025-09-17 00:58:14.866169 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.32s 2025-09-17 00:58:14.866181 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.03s 2025-09-17 00:58:14.866192 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.76s 2025-09-17 00:58:14.866210 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.14s 2025-09-17 00:58:14.866221 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.81s 2025-09-17 00:58:14.866231 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.11s 2025-09-17 00:58:14.866242 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.29s 2025-09-17 00:58:14.866253 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.17s 2025-09-17 00:58:14.866263 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.16s 2025-09-17 00:58:14.866274 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.07s 2025-09-17 00:58:14.866284 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.50s 2025-09-17 00:58:14.866295 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.38s 2025-09-17 00:58:14.866306 | orchestrator | service-cert-copy : barbican | 
Copying over backend internal TLS certificate --- 3.21s 2025-09-17 00:58:14.866316 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.20s 2025-09-17 00:58:14.866327 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.62s 2025-09-17 00:58:14.866338 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.56s 2025-09-17 00:58:14.866348 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.44s 2025-09-17 00:58:14.866359 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.18s 2025-09-17 00:58:14.866370 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.42s 2025-09-17 00:58:14.866381 | orchestrator | 2025-09-17 00:58:14 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:58:14.866392 | orchestrator | 2025-09-17 00:58:14 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:58:14.866402 | orchestrator | 2025-09-17 00:58:14 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:58:14.866413 | orchestrator | 2025-09-17 00:58:14 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:58:17.890650 | orchestrator | 2025-09-17 00:58:17 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED 2025-09-17 00:58:17.890751 | orchestrator | 2025-09-17 00:58:17 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:58:17.891544 | orchestrator | 2025-09-17 00:58:17 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:58:17.892119 | orchestrator | 2025-09-17 00:58:17 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:58:17.892139 | orchestrator | 2025-09-17 00:58:17 | INFO  | Wait 1 second(s) until the next check 2025-09-17 
00:58:20.921885 | orchestrator | 2025-09-17 00:58:20 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED 2025-09-17 00:58:20.922098 | orchestrator | 2025-09-17 00:58:20 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:58:20.922391 | orchestrator | 2025-09-17 00:58:20 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:58:20.924447 | orchestrator | 2025-09-17 00:58:20 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:58:20.924537 | orchestrator | 2025-09-17 00:58:20 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:58:23.950209 | orchestrator | 2025-09-17 00:58:23 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED 2025-09-17 00:58:23.951111 | orchestrator | 2025-09-17 00:58:23 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:58:23.952809 | orchestrator | 2025-09-17 00:58:23 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:58:23.953823 | orchestrator | 2025-09-17 00:58:23 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:58:23.954083 | orchestrator | 2025-09-17 00:58:23 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:58:26.979480 | orchestrator | 2025-09-17 00:58:26 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED 2025-09-17 00:58:26.979593 | orchestrator | 2025-09-17 00:58:26 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED 2025-09-17 00:58:26.980159 | orchestrator | 2025-09-17 00:58:26 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED 2025-09-17 00:58:26.980819 | orchestrator | 2025-09-17 00:58:26 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 00:58:26.981346 | orchestrator | 2025-09-17 00:58:26 | INFO  | Wait 1 second(s) until the next check 2025-09-17 00:58:30.031421 | orchestrator 
| 2025-09-17 00:58:30 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:58:30.031549 | orchestrator | 2025-09-17 00:58:30 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:58:30.034697 | orchestrator | 2025-09-17 00:58:30 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:58:30.040375 | orchestrator | 2025-09-17 00:58:30 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:58:30.040402 | orchestrator | 2025-09-17 00:58:30 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:58:33.066121 | orchestrator | 2025-09-17 00:58:33 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:58:33.066875 | orchestrator | 2025-09-17 00:58:33 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:58:33.067379 | orchestrator | 2025-09-17 00:58:33 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:58:33.068646 | orchestrator | 2025-09-17 00:58:33 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:58:33.068668 | orchestrator | 2025-09-17 00:58:33 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:58:36.104006 | orchestrator | 2025-09-17 00:58:36 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:58:36.104460 | orchestrator | 2025-09-17 00:58:36 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:58:36.105904 | orchestrator | 2025-09-17 00:58:36 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:58:36.107142 | orchestrator | 2025-09-17 00:58:36 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:58:36.107169 | orchestrator | 2025-09-17 00:58:36 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:58:39.136633 | orchestrator | 2025-09-17 00:58:39 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:58:39.136731 | orchestrator | 2025-09-17 00:58:39 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:58:39.136745 | orchestrator | 2025-09-17 00:58:39 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:58:39.137062 | orchestrator | 2025-09-17 00:58:39 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:58:39.137083 | orchestrator | 2025-09-17 00:58:39 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:58:42.176099 | orchestrator | 2025-09-17 00:58:42 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:58:42.177315 | orchestrator | 2025-09-17 00:58:42 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:58:42.179152 | orchestrator | 2025-09-17 00:58:42 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:58:42.181639 | orchestrator | 2025-09-17 00:58:42 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:58:42.181663 | orchestrator | 2025-09-17 00:58:42 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:58:45.223280 | orchestrator | 2025-09-17 00:58:45 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:58:45.224603 | orchestrator | 2025-09-17 00:58:45 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:58:45.226179 | orchestrator | 2025-09-17 00:58:45 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:58:45.228004 | orchestrator | 2025-09-17 00:58:45 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:58:45.228210 | orchestrator | 2025-09-17 00:58:45 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:58:48.275487 | orchestrator | 2025-09-17 00:58:48 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:58:48.275596 | orchestrator | 2025-09-17 00:58:48 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:58:48.276169 | orchestrator | 2025-09-17 00:58:48 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:58:48.276193 | orchestrator | 2025-09-17 00:58:48 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:58:48.276227 | orchestrator | 2025-09-17 00:58:48 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:58:51.313503 | orchestrator | 2025-09-17 00:58:51 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:58:51.316047 | orchestrator | 2025-09-17 00:58:51 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:58:51.317726 | orchestrator | 2025-09-17 00:58:51 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:58:51.319463 | orchestrator | 2025-09-17 00:58:51 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:58:51.319489 | orchestrator | 2025-09-17 00:58:51 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:58:54.349014 | orchestrator | 2025-09-17 00:58:54 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:58:54.349256 | orchestrator | 2025-09-17 00:58:54 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:58:54.349886 | orchestrator | 2025-09-17 00:58:54 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:58:54.350522 | orchestrator | 2025-09-17 00:58:54 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:58:54.350649 | orchestrator | 2025-09-17 00:58:54 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:58:57.380584 | orchestrator | 2025-09-17 00:58:57 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:58:57.380791 | orchestrator | 2025-09-17 00:58:57 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:58:57.381428 | orchestrator | 2025-09-17 00:58:57 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state STARTED
2025-09-17 00:58:57.382163 | orchestrator | 2025-09-17 00:58:57 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:58:57.382218 | orchestrator | 2025-09-17 00:58:57 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:00.419047 | orchestrator | 2025-09-17 00:59:00 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:59:00.419766 | orchestrator | 2025-09-17 00:59:00 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:59:00.421703 | orchestrator | 2025-09-17 00:59:00 | INFO  | Task 6b5cef38-0147-46de-a952-a10442f1ace3 is in state SUCCESS
2025-09-17 00:59:00.422282 | orchestrator | 2025-09-17 00:59:00 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:00.422521 | orchestrator | 2025-09-17 00:59:00 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:03.474424 | orchestrator | 2025-09-17 00:59:03 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:59:03.475786 | orchestrator | 2025-09-17 00:59:03 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:59:03.477674 | orchestrator | 2025-09-17 00:59:03 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:03.479226 | orchestrator | 2025-09-17 00:59:03 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:03.479268 | orchestrator | 2025-09-17 00:59:03 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:06.516475 | orchestrator | 2025-09-17 00:59:06 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:59:06.518370 | orchestrator | 2025-09-17 00:59:06 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:59:06.520449 | orchestrator | 2025-09-17 00:59:06 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:06.524076 | orchestrator | 2025-09-17 00:59:06 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:06.524529 | orchestrator | 2025-09-17 00:59:06 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:09.557779 | orchestrator | 2025-09-17 00:59:09 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:59:09.559036 | orchestrator | 2025-09-17 00:59:09 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state STARTED
2025-09-17 00:59:09.561315 | orchestrator | 2025-09-17 00:59:09 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:09.562713 | orchestrator | 2025-09-17 00:59:09 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:09.563013 | orchestrator | 2025-09-17 00:59:09 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:12.617003 | orchestrator | 2025-09-17 00:59:12 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:59:12.620456 | orchestrator | 2025-09-17 00:59:12 | INFO  | Task 8a49c6a9-e92a-4a56-ac89-c051d830b982 is in state SUCCESS
2025-09-17 00:59:12.622635 | orchestrator |
2025-09-17 00:59:12.622780 | orchestrator |
2025-09-17 00:59:12.622796 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-09-17 00:59:12.622808 | orchestrator |
2025-09-17 00:59:12.622983 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-09-17 00:59:12.622998 | orchestrator | Wednesday 17 September 2025 00:56:21 +0000 (0:00:00.166) 0:00:00.166 ***
2025-09-17 00:59:12.623010 | orchestrator | changed: [localhost]
2025-09-17 00:59:12.623022 | orchestrator |
2025-09-17 00:59:12.623033 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-09-17 00:59:12.623045 | orchestrator | Wednesday 17 September 2025 00:56:22 +0000 (0:00:00.981) 0:00:01.148 ***
2025-09-17 00:59:12.623081 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2025-09-17 00:59:12.623093 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2025-09-17 00:59:12.623104 | orchestrator | changed: [localhost]
2025-09-17 00:59:12.623114 | orchestrator |
2025-09-17 00:59:12.623128 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-09-17 00:59:12.623141 | orchestrator | Wednesday 17 September 2025 00:57:43 +0000 (0:01:21.287) 0:01:22.436 ***
2025-09-17 00:59:12.623154 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2025-09-17 00:59:12.623167 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left).
2025-09-17 00:59:12.623179 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (1 retries left).
2025-09-17 00:59:12.623194 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.kernel", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.kernel.sha256"}
2025-09-17 00:59:12.623210 | orchestrator |
2025-09-17 00:59:12.623223 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:59:12.623236 | orchestrator | localhost : ok=2  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-09-17 00:59:12.623249 | orchestrator |
2025-09-17 00:59:12.623261 | orchestrator |
2025-09-17 00:59:12.623273 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:59:12.623285 | orchestrator | Wednesday 17 September 2025 00:58:59 +0000 (0:01:16.458) 0:02:38.894 ***
2025-09-17 00:59:12.623315 | orchestrator | ===============================================================================
2025-09-17 00:59:12.623327 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 81.29s
2025-09-17 00:59:12.623383 | orchestrator | Download ironic-agent kernel ------------------------------------------- 76.46s
2025-09-17 00:59:12.623409 | orchestrator | Ensure the destination directory exists --------------------------------- 0.98s
2025-09-17 00:59:12.623421 | orchestrator |
2025-09-17 00:59:12.623433 | orchestrator |
2025-09-17 00:59:12.623445 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 00:59:12.623457 | orchestrator |
2025-09-17 00:59:12.623469 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 00:59:12.623492 | orchestrator | Wednesday 17 September 2025 00:56:20 +0000 (0:00:00.347) 0:00:00.347 ***
2025-09-17 00:59:12.623503 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:59:12.623514 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:59:12.623525 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:59:12.623565 | orchestrator |
2025-09-17 00:59:12.623577 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 00:59:12.623588 | orchestrator | Wednesday 17 September 2025 00:56:20 +0000 (0:00:00.361) 0:00:00.708 ***
2025-09-17 00:59:12.623599 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-09-17 00:59:12.623610 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-09-17 00:59:12.623642 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-09-17 00:59:12.623653 | orchestrator |
2025-09-17 00:59:12.623664 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-09-17 00:59:12.623675 | orchestrator |
2025-09-17 00:59:12.623685 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-17 00:59:12.623696 | orchestrator | Wednesday 17 September 2025 00:56:21 +0000 (0:00:00.448) 0:00:01.157 ***
2025-09-17 00:59:12.623707 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:59:12.623718 | orchestrator |
2025-09-17 00:59:12.623729 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-09-17 00:59:12.623747 | orchestrator | Wednesday 17 September 2025 00:56:21 +0000 (0:00:00.470) 0:00:01.627 ***
2025-09-17 00:59:12.623758 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-09-17 00:59:12.623769 | orchestrator |
2025-09-17 00:59:12.623779 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-09-17 00:59:12.623790 | orchestrator | Wednesday 17 September 2025 00:56:25 +0000 (0:00:03.400) 0:00:05.027 ***
2025-09-17 00:59:12.623801 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-09-17 00:59:12.623812 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-09-17 00:59:12.623822 | orchestrator |
2025-09-17 00:59:12.623847 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-09-17 00:59:12.623858 | orchestrator | Wednesday 17 September 2025 00:56:33 +0000 (0:00:07.816) 0:00:12.844 ***
2025-09-17 00:59:12.623869 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-17 00:59:12.623879 | orchestrator |
2025-09-17 00:59:12.623907 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-09-17 00:59:12.623919 | orchestrator | Wednesday 17 September 2025 00:56:36 +0000 (0:00:03.450) 0:00:16.294 ***
2025-09-17 00:59:12.623948 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-17 00:59:12.623959 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-09-17 00:59:12.623970 | orchestrator |
2025-09-17 00:59:12.623981 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-09-17 00:59:12.623991 | orchestrator | Wednesday 17 September 2025 00:56:40 +0000 (0:00:03.815) 0:00:20.109 ***
2025-09-17 00:59:12.624002 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-17 00:59:12.624013 | orchestrator |
2025-09-17 00:59:12.624024 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-09-17 00:59:12.624034 | orchestrator | Wednesday 17 September 2025 00:56:43 +0000 (0:00:02.853) 0:00:22.963 ***
2025-09-17 00:59:12.624045 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-09-17 00:59:12.624056 | orchestrator |
2025-09-17 00:59:12.624067 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-09-17 00:59:12.624077 | orchestrator | Wednesday 17 September 2025 00:56:47 +0000 (0:00:02.598) 0:00:26.725 ***
2025-09-17 00:59:12.624092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 00:59:12.624107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 00:59:12.624127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 00:59:12.624146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 00:59:12.624167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 00:59:12.624179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 00:59:12.624190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624383 | orchestrator |
2025-09-17 00:59:12.624394 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-09-17 00:59:12.624405 | orchestrator | Wednesday 17 September 2025 00:56:49 +0000 (0:00:02.598) 0:00:29.323 ***
2025-09-17 00:59:12.624416 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:59:12.624427 | orchestrator |
2025-09-17 00:59:12.624438 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-09-17 00:59:12.624449 | orchestrator | Wednesday 17 September 2025 00:56:49 +0000 (0:00:00.132) 0:00:29.456 ***
2025-09-17 00:59:12.624459 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:59:12.624470 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:59:12.624481 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:59:12.624491 | orchestrator |
2025-09-17 00:59:12.624502 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-17 00:59:12.624513 | orchestrator | Wednesday 17 September 2025 00:56:50 +0000 (0:00:00.308) 0:00:29.765 ***
2025-09-17 00:59:12.624524 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:59:12.624535 | orchestrator |
2025-09-17 00:59:12.624545 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-09-17 00:59:12.624556 | orchestrator | Wednesday 17 September 2025 00:56:50 +0000 (0:00:00.771) 0:00:30.536 ***
2025-09-17 00:59:12.624567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 00:59:12.624586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 00:59:12.624598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 00:59:12.624621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 00:59:12.624634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 00:59:12.624645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 00:59:12.624666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.624735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value':
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.624747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.624770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.624781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.624793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.624809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.624827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.624839 | orchestrator | 2025-09-17 00:59:12.624850 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-17 00:59:12.624861 | orchestrator | Wednesday 17 September 2025 00:56:56 +0000 (0:00:05.857) 0:00:36.394 *** 2025-09-17 00:59:12.624872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.624891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:59:12.624902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.624914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.624982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.625660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.625682 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:59:12.625694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.625715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:59:12.625727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.625739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.625750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.625800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.625813 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:59:12.625825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.625843 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:59:12.625855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.625866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.625878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.625921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.625958 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:59:12.625969 | orchestrator | 2025-09-17 00:59:12.625980 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-17 00:59:12.625999 | orchestrator | Wednesday 17 September 2025 00:56:57 +0000 (0:00:00.951) 0:00:37.345 *** 2025-09-17 00:59:12.626010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.626073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:59:12.626085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.626096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.626108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.626218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.626246 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:59:12.626258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.626270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:59:12.626281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.626292 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.626303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.626314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.626332 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:59:12.626373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.626387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:59:12.626398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.626409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.626421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.626466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2025-09-17 00:59:12.626485 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:59:12.626496 | orchestrator | 2025-09-17 00:59:12.626507 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-17 00:59:12.626519 | orchestrator | Wednesday 17 September 2025 00:56:59 +0000 (0:00:02.145) 0:00:39.491 *** 2025-09-17 00:59:12.626561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 00:59:12.626575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 00:59:12.626587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 00:59:12.626598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.626862 | orchestrator | 2025-09-17 00:59:12.626875 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-17 00:59:12.626887 | orchestrator | Wednesday 17 September 2025 00:57:06 +0000 (0:00:07.160) 0:00:46.652 *** 2025-09-17 00:59:12.626953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 00:59:12.626969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 00:59:12.626982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 00:59:12.626996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627231 | orchestrator | 2025-09-17 00:59:12.627241 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-17 00:59:12.627252 | orchestrator | Wednesday 17 September 2025 00:57:29 +0000 (0:00:22.439) 0:01:09.091 *** 2025-09-17 00:59:12.627264 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-17 00:59:12.627275 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-17 00:59:12.627285 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-17 00:59:12.627296 | 
orchestrator | 2025-09-17 00:59:12.627306 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-17 00:59:12.627317 | orchestrator | Wednesday 17 September 2025 00:57:35 +0000 (0:00:06.629) 0:01:15.721 *** 2025-09-17 00:59:12.627327 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-17 00:59:12.627338 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-17 00:59:12.627349 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-17 00:59:12.627359 | orchestrator | 2025-09-17 00:59:12.627375 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-17 00:59:12.627386 | orchestrator | Wednesday 17 September 2025 00:57:39 +0000 (0:00:03.908) 0:01:19.629 *** 2025-09-17 00:59:12.627403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.627416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.627427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.627445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627657 | orchestrator | 2025-09-17 00:59:12.627668 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-17 00:59:12.627678 | orchestrator | Wednesday 17 September 2025 00:57:43 +0000 (0:00:03.443) 0:01:23.072 *** 2025-09-17 00:59:12.627700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.627712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.627724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.627742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.627914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 00:59:12.627966 | orchestrator | 
2025-09-17 00:59:12.627977 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-17 00:59:12.627988 | orchestrator | Wednesday 17 September 2025 00:57:47 +0000 (0:00:03.707) 0:01:26.780 *** 2025-09-17 00:59:12.627999 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:59:12.628010 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:59:12.628020 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:59:12.628031 | orchestrator | 2025-09-17 00:59:12.628041 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-17 00:59:12.628052 | orchestrator | Wednesday 17 September 2025 00:57:47 +0000 (0:00:00.341) 0:01:27.121 *** 2025-09-17 00:59:12.628074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.628086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:59:12.628104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.628116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.628128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.628139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.628150 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:59:12.628166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.628183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:59:12.628201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.628213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.628224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.628235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.628246 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:59:12.628258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 00:59:12.628278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 00:59:12.628297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 00:59:12.628309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
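A second illustrative aside: every `volumes` list in the items above ends with one or two empty strings (`'', ''`), which are placeholders left behind by optional volume templating. Before such a list is handed to a container runtime, the empty entries have to be dropped. A minimal sketch, assuming nothing beyond the list shapes visible in the log; the helper name is hypothetical:

```python
# Sketch: filter the empty-string placeholders that optional volume
# templating leaves in the 'volumes' lists shown in the log above.
# Illustrative only -- not kolla-ansible code.

def clean_volumes(volumes: list[str]) -> list[str]:
    """Drop empty or whitespace-only entries from a volume mount list."""
    return [v for v in volumes if v.strip()]


# Example taken directly from a designate-central item above:
mounts = clean_volumes([
    "/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro",
    "/etc/localtime:/etc/localtime:ro",
    "/etc/timezone:/etc/timezone:ro",
    "kolla_logs:/var/log/kolla/",
    "",
    "",
])
print(len(mounts))  # 4
```

Passing the raw list through unfiltered would hand the runtime an empty bind-mount specification, which it cannot parse.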
2025-09-17 00:59:12.628320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628342 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:59:12.628353 | orchestrator |
2025-09-17 00:59:12.628364 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-09-17 00:59:12.628375 | orchestrator | Wednesday 17 September 2025 00:57:48 +0000 (0:00:01.329) 0:01:28.451 ***
2025-09-17 00:59:12.628386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 00:59:12.628408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 00:59:12.628430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 00:59:12.628441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 00:59:12.628453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 00:59:12.628464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 00:59:12.628480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 00:59:12.628644 | orchestrator |
2025-09-17 00:59:12.628655 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-17 00:59:12.628666 | orchestrator | Wednesday 17 September 2025 00:57:53 +0000 (0:00:04.999) 0:01:33.450 ***
2025-09-17 00:59:12.628677 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:59:12.628687 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:59:12.628698 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:59:12.628709 | orchestrator |
2025-09-17 00:59:12.628720 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-09-17 00:59:12.628730 | orchestrator | Wednesday 17 September 2025 00:57:54 +0000 (0:00:00.363) 0:01:33.814 ***
2025-09-17 00:59:12.628741 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-09-17 00:59:12.628752 | orchestrator |
2025-09-17 00:59:12.628763 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-09-17 00:59:12.628774 | orchestrator | Wednesday 17 September 2025 00:57:56 +0000 (0:00:02.258) 0:01:36.072 ***
2025-09-17 00:59:12.628784 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 00:59:12.628795 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-09-17 00:59:12.628812 | orchestrator |
2025-09-17 00:59:12.628823 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-09-17 00:59:12.628833 | orchestrator | Wednesday 17 September 2025 00:57:58 +0000 (0:00:02.348) 0:01:38.421 ***
2025-09-17 00:59:12.628844 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:12.628854 | orchestrator |
2025-09-17 00:59:12.628865 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-17 00:59:12.628876 | orchestrator | Wednesday 17 September 2025 00:58:13 +0000 (0:00:14.515) 0:01:52.936 ***
2025-09-17 00:59:12.628886 | orchestrator |
2025-09-17 00:59:12.628897 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-17 00:59:12.628908 | orchestrator | Wednesday 17 September 2025 00:58:13 +0000 (0:00:00.350) 0:01:53.287 ***
2025-09-17 00:59:12.628918 | orchestrator |
2025-09-17 00:59:12.628989 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-17 00:59:12.629000 | orchestrator | Wednesday 17 September 2025 00:58:13 +0000 (0:00:00.284) 0:01:53.572 ***
2025-09-17 00:59:12.629011 | orchestrator |
2025-09-17 00:59:12.629028 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-17 00:59:12.629039 | orchestrator | Wednesday 17 September 2025 00:58:14 +0000 (0:00:00.209) 0:01:53.782 ***
2025-09-17 00:59:12.629049 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:59:12.629060 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:59:12.629071 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:12.629081 | orchestrator |
2025-09-17 00:59:12.629098 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-17 00:59:12.629110 | orchestrator | Wednesday 17 September 2025 00:58:24 +0000 (0:00:10.529) 0:02:04.311 ***
2025-09-17 00:59:12.629120 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:12.629131 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:59:12.629142 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:59:12.629152 | orchestrator |
2025-09-17 00:59:12.629163 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-09-17 00:59:12.629174 | orchestrator | Wednesday 17 September 2025 00:58:31 +0000 (0:00:06.989) 0:02:11.301 ***
2025-09-17 00:59:12.629184 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:12.629195 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:59:12.629205 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:59:12.629216 | orchestrator |
2025-09-17 00:59:12.629227 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-09-17 00:59:12.629237 | orchestrator | Wednesday 17 September 2025 00:58:37 +0000 (0:00:06.009) 0:02:17.310 ***
2025-09-17 00:59:12.629247 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:12.629256 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:59:12.629266 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:59:12.629275 | orchestrator |
2025-09-17 00:59:12.629285 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-09-17 00:59:12.629295 | orchestrator | Wednesday 17 September 2025 00:58:47 +0000 (0:00:10.076) 0:02:27.387 ***
2025-09-17 00:59:12.629304 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:12.629314 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:59:12.629323 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:59:12.629332 | orchestrator |
2025-09-17 00:59:12.629342 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-09-17 00:59:12.629352 | orchestrator | Wednesday 17 September 2025 00:58:53 +0000 (0:00:05.336) 0:02:32.724 ***
2025-09-17 00:59:12.629361 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:59:12.629371 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:59:12.629380 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:12.629389 | orchestrator |
2025-09-17 00:59:12.629399 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-09-17 00:59:12.629409 | orchestrator | Wednesday 17 September 2025 00:59:01 +0000 (0:00:08.808) 0:02:41.532 ***
2025-09-17 00:59:12.629418 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:12.629434 | orchestrator |
2025-09-17 00:59:12.629444 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:59:12.629454 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-17 00:59:12.629464 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 00:59:12.629474 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 00:59:12.629484 | orchestrator |
2025-09-17 00:59:12.629493 | orchestrator |
2025-09-17 00:59:12.629503 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:59:12.629512 | orchestrator | Wednesday 17 September 2025 00:59:09 +0000 (0:00:07.723) 0:02:49.256 ***
2025-09-17 00:59:12.629522 | orchestrator | ===============================================================================
2025-09-17 00:59:12.629532 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.44s
2025-09-17 00:59:12.629541 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.52s
2025-09-17 00:59:12.629551 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.53s
2025-09-17 00:59:12.629560 | orchestrator | designate : Restart designate-producer container ----------------------- 10.08s
2025-09-17 00:59:12.629569 | orchestrator | designate : Restart designate-worker container -------------------------- 8.81s
2025-09-17 00:59:12.629579 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.82s
2025-09-17 00:59:12.629589 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.72s
2025-09-17 00:59:12.629598 | orchestrator | designate : Copying over config.json files for services ----------------- 7.16s
2025-09-17 00:59:12.629607 | orchestrator | designate : Restart designate-api container ----------------------------- 6.99s
2025-09-17 00:59:12.629617 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.63s
2025-09-17 00:59:12.629626 | orchestrator | designate : Restart designate-central container ------------------------- 6.01s
2025-09-17 00:59:12.629636 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.86s
2025-09-17 00:59:12.629646 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.34s
2025-09-17 00:59:12.629655 | orchestrator | designate : Check designate containers ---------------------------------- 5.00s
2025-09-17 00:59:12.629664 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.91s
2025-09-17 00:59:12.629674 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.82s
2025-09-17 00:59:12.629683 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.76s
2025-09-17 00:59:12.629693 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.71s
2025-09-17 00:59:12.629707 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.45s
2025-09-17 00:59:12.629717 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.45s
2025-09-17 00:59:12.629726 | orchestrator | 2025-09-17 00:59:12 | INFO  | Task 5e1160c4-c1e0-4529-af72-de11175b2ca5 is in state STARTED
2025-09-17 00:59:12.629740 | orchestrator | 2025-09-17 00:59:12 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:12.629750 | orchestrator | 2025-09-17 00:59:12 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:12.629760 | orchestrator | 2025-09-17 00:59:12 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:15.663094 | orchestrator | 2025-09-17 00:59:15 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:59:15.663195 | orchestrator | 2025-09-17 00:59:15 | INFO  | Task 5e1160c4-c1e0-4529-af72-de11175b2ca5 is in state SUCCESS
2025-09-17 00:59:15.664755 | orchestrator | 2025-09-17 00:59:15 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:15.665595 | orchestrator | 2025-09-17 00:59:15 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:15.665838 | orchestrator | 2025-09-17 00:59:15 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:18.718397 | orchestrator | 2025-09-17 00:59:18 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:59:18.719606 | orchestrator | 2025-09-17 00:59:18 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:18.720067 | orchestrator | 2025-09-17 00:59:18 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:18.720691 | orchestrator | 2025-09-17 00:59:18 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:18.720775 | orchestrator | 2025-09-17 00:59:18 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:21.752684 | orchestrator | 2025-09-17 00:59:21 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:59:21.753147 | orchestrator | 2025-09-17 00:59:21 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:21.755381 | orchestrator | 2025-09-17 00:59:21 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:21.756550 | orchestrator | 2025-09-17 00:59:21 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:21.756990 | orchestrator | 2025-09-17 00:59:21 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:24.795122 | orchestrator | 2025-09-17 00:59:24 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:59:24.796426 | orchestrator | 2025-09-17 00:59:24 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:24.797154 | orchestrator | 2025-09-17 00:59:24 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:24.797678 | orchestrator | 2025-09-17 00:59:24 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:24.797773 | orchestrator | 2025-09-17 00:59:24 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:27.827590 | orchestrator | 2025-09-17 00:59:27 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:59:27.828391 | orchestrator | 2025-09-17 00:59:27 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:27.829193 | orchestrator | 2025-09-17 00:59:27 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:27.830120 | orchestrator | 2025-09-17 00:59:27 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:27.830146 | orchestrator | 2025-09-17 00:59:27 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:30.863584 | orchestrator | 2025-09-17 00:59:30 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state STARTED
2025-09-17 00:59:30.865725 | orchestrator | 2025-09-17 00:59:30 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:30.867236 | orchestrator | 2025-09-17 00:59:30 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:30.868877 | orchestrator | 2025-09-17 00:59:30 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:30.868996 | orchestrator | 2025-09-17 00:59:30 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:33.908166 | orchestrator | 2025-09-17 00:59:33 | INFO  | Task ecb85e37-6558-4168-80aa-9a596e2525bc is in state SUCCESS
2025-09-17 00:59:33.909119 | orchestrator |
2025-09-17 00:59:33.909152 | orchestrator |
2025-09-17 00:59:33.909164 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 00:59:33.909175 | orchestrator |
2025-09-17 00:59:33.909186 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 00:59:33.909196 | orchestrator | Wednesday 17 September 2025 00:59:13 +0000 (0:00:00.136) 0:00:00.136 ***
2025-09-17 00:59:33.909206 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:59:33.909217 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:59:33.909227 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:59:33.909237 | orchestrator |
2025-09-17 00:59:33.909246 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 00:59:33.909256 | orchestrator | Wednesday 17 September 2025 00:59:13 +0000 (0:00:00.230) 0:00:00.367 ***
2025-09-17 00:59:33.909281 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-17 00:59:33.909292 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-17 00:59:33.909312 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-17 00:59:33.909322 | orchestrator |
2025-09-17 00:59:33.909332 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-09-17 00:59:33.909341 | orchestrator |
2025-09-17 00:59:33.909351 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-09-17 00:59:33.909361 | orchestrator | Wednesday 17 September 2025 00:59:14 +0000 (0:00:00.500) 0:00:00.867 ***
2025-09-17 00:59:33.909371 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:59:33.909380 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:59:33.909390 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:59:33.909399 | orchestrator |
2025-09-17 00:59:33.909409 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:59:33.909420 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:59:33.909433 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:59:33.909442 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 00:59:33.909452 | orchestrator |
2025-09-17 00:59:33.909462 | orchestrator |
2025-09-17 00:59:33.909472 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:59:33.909481 | orchestrator | Wednesday 17 September 2025 00:59:14 +0000 (0:00:00.624) 0:00:01.492 ***
2025-09-17 00:59:33.909491 | orchestrator | ===============================================================================
2025-09-17 00:59:33.909501 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.63s
2025-09-17 00:59:33.909510 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
2025-09-17 00:59:33.909520 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.23s
2025-09-17 00:59:33.909530 | orchestrator |
2025-09-17 00:59:33.909539 | orchestrator |
2025-09-17 00:59:33.909549 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 00:59:33.909558 | orchestrator |
2025-09-17 00:59:33.909568 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 00:59:33.909659 | orchestrator | Wednesday 17 September 2025 00:58:19 +0000 (0:00:00.214) 0:00:00.214 ***
2025-09-17 00:59:33.909671 | orchestrator | ok: [testbed-node-0]
2025-09-17 00:59:33.909681 | orchestrator | ok: [testbed-node-1]
2025-09-17 00:59:33.909690 | orchestrator | ok: [testbed-node-2]
2025-09-17 00:59:33.909700 | orchestrator |
2025-09-17 00:59:33.909709 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 00:59:33.909719 | orchestrator | Wednesday 17 September 2025 00:58:20 +0000 (0:00:00.258) 0:00:00.473 ***
2025-09-17 00:59:33.909729 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-09-17 00:59:33.909764 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-09-17 00:59:33.909774 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-09-17 00:59:33.909784 | orchestrator |
2025-09-17 00:59:33.909794 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-09-17 00:59:33.909803 | orchestrator |
2025-09-17 00:59:33.909813 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-17 00:59:33.909822 | orchestrator | Wednesday 17 September 2025 00:58:20 +0000 (0:00:00.320) 0:00:00.794 ***
2025-09-17 00:59:33.909832 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:59:33.909842 | orchestrator |
2025-09-17 00:59:33.909852 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-09-17 00:59:33.909861 | orchestrator | Wednesday 17 September 2025 00:58:21 +0000 (0:00:00.844) 0:00:01.638 ***
2025-09-17 00:59:33.909871 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-09-17 00:59:33.909880 | orchestrator |
2025-09-17 00:59:33.909890 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-09-17 00:59:33.909899 | orchestrator | Wednesday 17 September 2025 00:58:24 +0000 (0:00:03.598) 0:00:05.237 ***
2025-09-17 00:59:33.909909 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-09-17 00:59:33.909919 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-09-17 00:59:33.909948 | orchestrator |
2025-09-17 00:59:33.909958 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-09-17 00:59:33.909967 | orchestrator | Wednesday 17 September 2025 00:58:31 +0000 (0:00:06.940) 0:00:12.177 ***
2025-09-17 00:59:33.909977 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-17 00:59:33.909986 | orchestrator |
2025-09-17 00:59:33.909996 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-09-17 00:59:33.910006 | orchestrator | Wednesday 17 September 2025 00:58:35 +0000 (0:00:03.632) 0:00:15.809 ***
2025-09-17 00:59:33.910087 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-17 00:59:33.910099 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-09-17 00:59:33.910109 | orchestrator |
2025-09-17 00:59:33.910118 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-09-17 00:59:33.910128 | orchestrator | Wednesday 17 September 2025 00:58:39 +0000 (0:00:04.168) 0:00:19.978 ***
2025-09-17 00:59:33.910137 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-17 00:59:33.910147 | orchestrator |
2025-09-17 00:59:33.910157 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-09-17 00:59:33.910166 | orchestrator | Wednesday 17 September 2025 00:58:43 +0000 (0:00:03.620) 0:00:23.598 ***
2025-09-17 00:59:33.910175 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-09-17 00:59:33.910185 | orchestrator |
2025-09-17 00:59:33.910195 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-17 00:59:33.910204 | orchestrator | Wednesday 17 September 2025 00:58:47 +0000 (0:00:04.478) 0:00:28.077 ***
2025-09-17 00:59:33.910213 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:59:33.910223 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:59:33.910233 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:59:33.910323 | orchestrator |
2025-09-17 00:59:33.910337 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-09-17 00:59:33.910349 | orchestrator | Wednesday 17 September 2025 00:58:48 +0000 (0:00:00.282) 0:00:28.360 ***
2025-09-17 00:59:33.910364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-17 00:59:33.910393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-17 00:59:33.910405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-17 00:59:33.910417 | orchestrator |
2025-09-17 00:59:33.910428 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-09-17 00:59:33.910439 | orchestrator | Wednesday 17 September 2025 00:58:49 +0000 (0:00:01.102) 0:00:29.462 ***
2025-09-17 00:59:33.910450 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:59:33.910460 | orchestrator |
2025-09-17 00:59:33.910471 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-09-17 00:59:33.910482 | orchestrator | Wednesday 17 September 2025 00:58:49 +0000 (0:00:00.203) 0:00:29.666 ***
2025-09-17 00:59:33.910497 | orchestrator | skipping: [testbed-node-0]
2025-09-17 00:59:33.910515 | orchestrator | skipping: [testbed-node-1]
2025-09-17 00:59:33.910527 | orchestrator | skipping: [testbed-node-2]
2025-09-17 00:59:33.910538 | orchestrator |
2025-09-17 00:59:33.910549 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-17 00:59:33.910560 | orchestrator | Wednesday 17 September 2025 00:58:49 +0000 (0:00:00.486) 0:00:30.152 ***
2025-09-17 00:59:33.910571 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 00:59:33.910582 | orchestrator |
2025-09-17 00:59:33.910593 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-09-17 00:59:33.910604 | orchestrator | Wednesday 17 September 2025 00:58:50 +0000 (0:00:00.457) 0:00:30.609 ***
2025-09-17 00:59:33.910616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-17 00:59:33.910634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name':
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:59:33.910645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:59:33.910655 | orchestrator | 2025-09-17 00:59:33.910665 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-17 00:59:33.910675 | orchestrator | Wednesday 17 September 2025 00:58:51 +0000 
(0:00:01.450) 0:00:32.060 *** 2025-09-17 00:59:33.910697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 00:59:33.910708 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:59:33.910718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 00:59:33.910734 | 
orchestrator | skipping: [testbed-node-1] 2025-09-17 00:59:33.910744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 00:59:33.910754 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:59:33.910764 | orchestrator | 2025-09-17 00:59:33.910774 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-17 00:59:33.910783 | orchestrator | Wednesday 17 September 2025 00:58:52 +0000 (0:00:00.806) 0:00:32.867 *** 2025-09-17 00:59:33.910793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 00:59:33.910803 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:59:33.910823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 00:59:33.910834 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:59:33.910844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 00:59:33.910860 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:59:33.910869 | orchestrator | 2025-09-17 00:59:33.910879 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-17 00:59:33.910889 | orchestrator | Wednesday 17 September 2025 00:58:53 +0000 (0:00:00.639) 0:00:33.507 *** 2025-09-17 00:59:33.910898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:59:33.910909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:59:33.910919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:59:33.910998 | orchestrator | 2025-09-17 00:59:33.911009 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-17 00:59:33.911018 | orchestrator | Wednesday 17 September 2025 00:58:54 +0000 (0:00:01.520) 0:00:35.027 *** 2025-09-17 00:59:33.911046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:59:33.911056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:59:33.911064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-17 00:59:33.911072 | orchestrator |
2025-09-17 00:59:33.911080 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-09-17 00:59:33.911088 | orchestrator | Wednesday 17 September 2025 00:58:56 +0000 (0:00:02.283) 0:00:37.311 ***
2025-09-17 00:59:33.911096 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-17 00:59:33.911104 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-17 00:59:33.911112 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-17 00:59:33.911190 | orchestrator |
2025-09-17 00:59:33.911199 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-09-17 00:59:33.911207 | orchestrator | Wednesday 17 September 2025 00:58:58 +0000 (0:00:01.641) 0:00:38.952 ***
2025-09-17 00:59:33.911215 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:33.911223 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:59:33.911231 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:59:33.911239 | orchestrator |
2025-09-17 00:59:33.911247 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-09-17 00:59:33.911261 | orchestrator | Wednesday 17 September 2025 00:58:59
+0000 (0:00:01.275) 0:00:40.228 *** 2025-09-17 00:59:33.911285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 00:59:33.911295 | orchestrator | skipping: [testbed-node-0] 2025-09-17 00:59:33.911303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 
00:59:33.911311 | orchestrator | skipping: [testbed-node-1] 2025-09-17 00:59:33.911319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 00:59:33.911328 | orchestrator | skipping: [testbed-node-2] 2025-09-17 00:59:33.911335 | orchestrator | 2025-09-17 00:59:33.911343 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-17 00:59:33.911351 | orchestrator | Wednesday 17 September 2025 00:59:00 +0000 (0:00:00.573) 0:00:40.802 *** 2025-09-17 00:59:33.911359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:59:33.911383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 00:59:33.911393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-17 00:59:33.911401 | orchestrator |
2025-09-17 00:59:33.911409 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-09-17 00:59:33.911417 | orchestrator | Wednesday 17 September 2025 00:59:01 +0000 (0:00:01.066) 0:00:41.868 ***
2025-09-17 00:59:33.911425 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:33.911432 | orchestrator |
2025-09-17 00:59:33.911440 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-09-17 00:59:33.911448 | orchestrator | Wednesday 17 September 2025 00:59:04 +0000 (0:00:02.804) 0:00:44.673 ***
2025-09-17 00:59:33.911456 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:33.911464 | orchestrator |
2025-09-17 00:59:33.911471 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-09-17 00:59:33.911479 | orchestrator | Wednesday 17 September 2025 00:59:06 +0000 (0:00:02.546) 0:00:47.219 ***
2025-09-17 00:59:33.911487 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:33.911495 | orchestrator |
2025-09-17 00:59:33.911503 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-17 00:59:33.911510 | orchestrator | Wednesday 17 September 2025 00:59:22 +0000 (0:00:15.172) 0:01:02.392 ***
2025-09-17 00:59:33.911518 | orchestrator |
2025-09-17 00:59:33.911526 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-17 00:59:33.911534 | orchestrator | Wednesday 17 September 2025 00:59:22 +0000 (0:00:00.067) 0:01:02.459 ***
2025-09-17 00:59:33.911541 | orchestrator |
2025-09-17 00:59:33.911549 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-17 00:59:33.911557 | orchestrator | Wednesday 17 September 2025 00:59:22 +0000 (0:00:00.060) 0:01:02.519 ***
2025-09-17 00:59:33.911565 | orchestrator |
2025-09-17 00:59:33.911572 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-09-17 00:59:33.911580 | orchestrator | Wednesday 17 September 2025 00:59:22 +0000 (0:00:00.061) 0:01:02.581 ***
2025-09-17 00:59:33.911588 | orchestrator | changed: [testbed-node-0]
2025-09-17 00:59:33.911596 | orchestrator | changed: [testbed-node-1]
2025-09-17 00:59:33.911610 | orchestrator | changed: [testbed-node-2]
2025-09-17 00:59:33.911618 | orchestrator |
2025-09-17 00:59:33.911625 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 00:59:33.911634 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 00:59:33.911644 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-17 00:59:33.911652 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-17 00:59:33.911660 | orchestrator |
2025-09-17 00:59:33.911668 | orchestrator |
2025-09-17 00:59:33.911676 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 00:59:33.911684 | orchestrator | Wednesday 17 September 2025 00:59:32 +0000 (0:00:10.039) 0:01:12.620 ***
2025-09-17 00:59:33.911691 | orchestrator | ===============================================================================
2025-09-17 00:59:33.911699 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.17s
2025-09-17 00:59:33.911765 | orchestrator | placement : Restart placement-api container ---------------------------- 10.04s
2025-09-17 00:59:33.911774 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.93s
2025-09-17 00:59:33.911782 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.48s
2025-09-17 00:59:33.911790 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.17s
2025-09-17 00:59:33.911798 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.63s
2025-09-17 00:59:33.911806 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.62s
2025-09-17 00:59:33.911814 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.60s
2025-09-17 00:59:33.911822 | orchestrator | placement : Creating placement databases -------------------------------- 2.80s
2025-09-17 00:59:33.911830 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.55s
2025-09-17 00:59:33.911896 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.28s
2025-09-17 00:59:33.911911 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.64s
2025-09-17 00:59:33.911919 | orchestrator | placement : Copying over config.json files for services ----------------- 1.52s
2025-09-17 00:59:33.911943 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.45s
2025-09-17 00:59:33.911952 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.28s
2025-09-17 00:59:33.911960 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.10s
2025-09-17 00:59:33.911967 | orchestrator | placement : Check placement containers ---------------------------------- 1.07s
2025-09-17 00:59:33.911975 | orchestrator | placement : include_tasks ----------------------------------------------- 0.84s
2025-09-17 00:59:33.911983 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.81s
2025-09-17 00:59:33.911991 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.64s
2025-09-17 00:59:33.911999 | orchestrator | 2025-09-17 00:59:33 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:33.912007 | orchestrator | 2025-09-17 00:59:33 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:33.912015 | orchestrator | 2025-09-17 00:59:33 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED
2025-09-17 00:59:33.912023 | orchestrator | 2025-09-17 00:59:33 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:33.912031 | orchestrator | 2025-09-17 00:59:33 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:36.939677 | orchestrator | 2025-09-17 00:59:36 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:36.939783 | orchestrator | 2025-09-17 00:59:36 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:36.939797 | orchestrator | 2025-09-17 00:59:36 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED
2025-09-17 00:59:36.939809 | orchestrator | 2025-09-17 00:59:36 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:36.939821 | orchestrator | 2025-09-17 00:59:36 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:39.963267 | orchestrator | 2025-09-17 00:59:39 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:39.966357 | orchestrator | 2025-09-17 00:59:39 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:39.968408 | orchestrator | 2025-09-17 00:59:39 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED
2025-09-17 00:59:39.969677 | orchestrator | 2025-09-17 00:59:39 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:39.969700 | orchestrator | 2025-09-17 00:59:39 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:43.001518 | orchestrator | 2025-09-17 00:59:42 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:43.001625 | orchestrator | 2025-09-17 00:59:43 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:43.002375 | orchestrator | 2025-09-17 00:59:43 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED
2025-09-17 00:59:43.002873 | orchestrator | 2025-09-17 00:59:43 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:43.003114 | orchestrator | 2025-09-17 00:59:43 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:46.135687 | orchestrator | 2025-09-17 00:59:46 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:46.135790 | orchestrator | 2025-09-17 00:59:46 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:46.135804 | orchestrator | 2025-09-17 00:59:46 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED
2025-09-17 00:59:46.135815 | orchestrator | 2025-09-17 00:59:46 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:46.135827 | orchestrator | 2025-09-17 00:59:46 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:49.141626 | orchestrator | 2025-09-17 00:59:49 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:49.141734 | orchestrator | 2025-09-17 00:59:49 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:49.141750 | orchestrator | 2025-09-17 00:59:49 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED
2025-09-17 00:59:49.141780 | orchestrator | 2025-09-17 00:59:49 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:49.141792 | orchestrator | 2025-09-17 00:59:49 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:52.143297 | orchestrator | 2025-09-17 00:59:52 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:52.143838 | orchestrator | 2025-09-17 00:59:52 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:52.144252 | orchestrator | 2025-09-17 00:59:52 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED
2025-09-17 00:59:52.145055 | orchestrator | 2025-09-17 00:59:52 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:52.145080 | orchestrator | 2025-09-17 00:59:52 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:55.172153 | orchestrator | 2025-09-17 00:59:55 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:55.172513 | orchestrator | 2025-09-17 00:59:55 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:55.174238 | orchestrator | 2025-09-17 00:59:55 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED
2025-09-17 00:59:55.174272 | orchestrator | 2025-09-17 00:59:55 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:55.174286 | orchestrator | 2025-09-17 00:59:55 | INFO  | Wait 1 second(s) until the next check
2025-09-17 00:59:58.208132 | orchestrator | 2025-09-17 00:59:58 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 00:59:58.213682 | orchestrator | 2025-09-17 00:59:58 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 00:59:58.214085 | orchestrator | 2025-09-17 00:59:58 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED
2025-09-17 00:59:58.215265 | orchestrator | 2025-09-17 00:59:58 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 00:59:58.215362 | orchestrator | 2025-09-17 00:59:58 | INFO  | Wait 1
second(s) until the next check 2025-09-17 01:00:01.257999 | orchestrator | 2025-09-17 01:00:01 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:00:01.259671 | orchestrator | 2025-09-17 01:00:01 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 01:00:01.262352 | orchestrator | 2025-09-17 01:00:01 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED 2025-09-17 01:00:01.266381 | orchestrator | 2025-09-17 01:00:01 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED 2025-09-17 01:00:01.266410 | orchestrator | 2025-09-17 01:00:01 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:00:04.318227 | orchestrator | 2025-09-17 01:00:04 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:00:04.322651 | orchestrator | 2025-09-17 01:00:04 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 01:00:04.325838 | orchestrator | 2025-09-17 01:00:04 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED 2025-09-17 01:00:04.328021 | orchestrator | 2025-09-17 01:00:04 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED 2025-09-17 01:00:04.328586 | orchestrator | 2025-09-17 01:00:04 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:00:07.373309 | orchestrator | 2025-09-17 01:00:07 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:00:07.374176 | orchestrator | 2025-09-17 01:00:07 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED 2025-09-17 01:00:07.374210 | orchestrator | 2025-09-17 01:00:07 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state STARTED 2025-09-17 01:00:07.375010 | orchestrator | 2025-09-17 01:00:07 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED 2025-09-17 01:00:07.375033 | orchestrator | 2025-09-17 01:00:07 | INFO  | Wait 1 second(s) until the next check 
2025-09-17 01:00:13.458900 | orchestrator | 2025-09-17 01:00:13 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED
2025-09-17 01:00:13.460896 | orchestrator | 2025-09-17 01:00:13 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 01:00:13.462537 | orchestrator | 2025-09-17 01:00:13 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 01:00:13.464044 | orchestrator | 2025-09-17 01:00:13 | INFO  | Task 1880fb0d-f39c-4f34-b769-8fc546383a29 is in state SUCCESS
2025-09-17 01:00:13.465400 | orchestrator | 2025-09-17 01:00:13 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 01:00:13.465532 | orchestrator | 2025-09-17 01:00:13 | INFO  | Wait 1 second(s) until the next check
2025-09-17 01:00:16.517189 | orchestrator | 2025-09-17 01:00:16 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED
2025-09-17 01:00:16.519210 | orchestrator | 2025-09-17 01:00:16 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 01:00:16.520878 | orchestrator | 2025-09-17 01:00:16 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state STARTED
2025-09-17 01:00:16.522511 | orchestrator | 2025-09-17 01:00:16 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 01:00:16.522543 | orchestrator | 2025-09-17 01:00:16 | INFO  | Wait 1 second(s) until the next check
2025-09-17 01:00:49.966788 | orchestrator | 2025-09-17 01:00:49 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED
2025-09-17 01:00:49.966912 | orchestrator | 2025-09-17 01:00:49 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 01:00:49.969326 | orchestrator | 2025-09-17 01:00:49 | INFO  | Task 4d3d6529-91c2-454a-9d2f-954474819c0a is in state SUCCESS
2025-09-17 01:00:49.971494 | orchestrator |
2025-09-17 01:00:49.972100 | orchestrator |
2025-09-17 01:00:49.972123 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 01:00:49.972133 | orchestrator |
2025-09-17 01:00:49.972143 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 01:00:49.972154 | orchestrator | Wednesday 17 September 2025 00:59:36 +0000 (0:00:00.276) 0:00:00.276 ***
2025-09-17 01:00:49.972164 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:00:49.972175 | orchestrator | ok: [testbed-node-1]
2025-09-17 01:00:49.972184 | orchestrator | ok: [testbed-node-2]
2025-09-17 01:00:49.972194 | orchestrator | ok: [testbed-manager]
2025-09-17 01:00:49.972204 | orchestrator | ok: [testbed-node-3]
2025-09-17 01:00:49.972213 | orchestrator | ok: [testbed-node-4]
01:00:49.972223 | orchestrator | ok: [testbed-node-5]
2025-09-17 01:00:49.972232 | orchestrator |
2025-09-17 01:00:49.972242 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 01:00:49.972251 | orchestrator | Wednesday 17 September 2025 00:59:37 +0000 (0:00:00.706) 0:00:00.983 ***
2025-09-17 01:00:49.972261 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-09-17 01:00:49.972271 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-09-17 01:00:49.972296 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-09-17 01:00:49.972306 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-09-17 01:00:49.972316 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-09-17 01:00:49.972325 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-09-17 01:00:49.972334 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-09-17 01:00:49.972344 | orchestrator |
2025-09-17 01:00:49.972353 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-17 01:00:49.972363 | orchestrator |
2025-09-17 01:00:49.972372 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-09-17 01:00:49.972382 | orchestrator | Wednesday 17 September 2025 00:59:38 +0000 (0:00:00.704) 0:00:01.687 ***
2025-09-17 01:00:49.972392 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 01:00:49.972403 | orchestrator |
2025-09-17 01:00:49.972413 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-09-17 01:00:49.972422 | orchestrator | Wednesday 17 September 2025 00:59:39 +0000 (0:00:01.444) 0:00:03.132 ***
2025-09-17 01:00:49.972453 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-09-17 01:00:49.972463 | orchestrator |
2025-09-17 01:00:49.972473 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-09-17 01:00:49.972483 | orchestrator | Wednesday 17 September 2025 00:59:43 +0000 (0:00:03.752) 0:00:06.884 ***
2025-09-17 01:00:49.972494 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-09-17 01:00:49.972505 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-09-17 01:00:49.972515 | orchestrator |
2025-09-17 01:00:49.972525 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-09-17 01:00:49.972534 | orchestrator | Wednesday 17 September 2025 00:59:50 +0000 (0:00:06.902) 0:00:13.787 ***
2025-09-17 01:00:49.972544 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-17 01:00:49.972554 | orchestrator |
2025-09-17 01:00:49.972563 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-09-17 01:00:49.972573 | orchestrator | Wednesday 17 September 2025 00:59:53 +0000 (0:00:03.423) 0:00:17.211 ***
2025-09-17 01:00:49.972582 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-17 01:00:49.972592 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-09-17 01:00:49.972601 | orchestrator |
2025-09-17 01:00:49.972610 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-09-17 01:00:49.972620 | orchestrator | Wednesday 17 September 2025 00:59:58 +0000 (0:00:04.263) 0:00:21.474 ***
2025-09-17 01:00:49.972629 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-17 01:00:49.972639 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-09-17 01:00:49.972649 | orchestrator |
2025-09-17 01:00:49.972658 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-09-17 01:00:49.972668 | orchestrator | Wednesday 17 September 2025 01:00:05 +0000 (0:00:06.982) 0:00:28.457 ***
2025-09-17 01:00:49.972677 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-09-17 01:00:49.972687 | orchestrator |
2025-09-17 01:00:49.972696 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 01:00:49.972706 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 01:00:49.972716 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 01:00:49.972728 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 01:00:49.972739 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 01:00:49.972752 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 01:00:49.972801 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 01:00:49.972815 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 01:00:49.972827 | orchestrator |
2025-09-17 01:00:49.972839 | orchestrator |
2025-09-17 01:00:49.972850 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 01:00:49.972861 | orchestrator | Wednesday 17 September 2025 01:00:09 +0000 (0:00:04.905) 0:00:33.363 ***
2025-09-17 01:00:49.972872 | orchestrator | ===============================================================================
service-ks-register : ceph-rgw | Creating roles ------------------------- 6.98s
2025-09-17 01:00:49.972901 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.90s
2025-09-17 01:00:49.972912 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.91s
2025-09-17 01:00:49.972923 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.26s
2025-09-17 01:00:49.972957 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.75s
2025-09-17 01:00:49.972969 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.42s
2025-09-17 01:00:49.972980 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.44s
2025-09-17 01:00:49.972990 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s
2025-09-17 01:00:49.972999 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2025-09-17 01:00:49.973008 | orchestrator |
2025-09-17 01:00:49.973018 | orchestrator |
2025-09-17 01:00:49.973027 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 01:00:49.973037 | orchestrator |
2025-09-17 01:00:49.973046 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 01:00:49.973056 | orchestrator | Wednesday 17 September 2025 00:56:20 +0000 (0:00:00.252) 0:00:00.252 ***
2025-09-17 01:00:49.973065 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:00:49.973075 | orchestrator | ok: [testbed-node-1]
2025-09-17 01:00:49.973084 | orchestrator | ok: [testbed-node-2]
2025-09-17 01:00:49.973094 | orchestrator | ok: [testbed-node-3]
2025-09-17 01:00:49.973103 | orchestrator | ok: [testbed-node-4]
2025-09-17 01:00:49.973112 | orchestrator | ok: [testbed-node-5]
2025-09-17 01:00:49.973122 | orchestrator |
2025-09-17 01:00:49.973131 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 01:00:49.973141 | orchestrator | Wednesday 17 September 2025 00:56:21 +0000 (0:00:00.799) 0:00:01.051 ***
2025-09-17 01:00:49.973150 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-09-17 01:00:49.973160 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-09-17 01:00:49.973170 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-09-17 01:00:49.973179 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-09-17 01:00:49.973189 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-09-17 01:00:49.973198 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-09-17 01:00:49.973208 | orchestrator |
2025-09-17 01:00:49.973217 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-09-17 01:00:49.973227 | orchestrator |
2025-09-17 01:00:49.973236 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-17 01:00:49.973246 | orchestrator | Wednesday 17 September 2025 00:56:22 +0000 (0:00:00.783) 0:00:01.834 ***
2025-09-17 01:00:49.973255 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 01:00:49.973265 | orchestrator |
2025-09-17 01:00:49.973274 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-09-17 01:00:49.973284 | orchestrator | Wednesday 17 September 2025 00:56:23 +0000 (0:00:01.056) 0:00:02.891 ***
2025-09-17 01:00:49.973293 | orchestrator | ok: [testbed-node-1]
2025-09-17 01:00:49.973303 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:00:49.973312 | orchestrator | ok: [testbed-node-2]
ok: [testbed-node-3]
2025-09-17 01:00:49.973331 | orchestrator | ok: [testbed-node-4]
2025-09-17 01:00:49.973340 | orchestrator | ok: [testbed-node-5]
2025-09-17 01:00:49.973350 | orchestrator |
2025-09-17 01:00:49.973359 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-09-17 01:00:49.973369 | orchestrator | Wednesday 17 September 2025 00:56:24 +0000 (0:00:01.292) 0:00:04.184 ***
2025-09-17 01:00:49.973378 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:00:49.973387 | orchestrator | ok: [testbed-node-1]
2025-09-17 01:00:49.973397 | orchestrator | ok: [testbed-node-2]
2025-09-17 01:00:49.973406 | orchestrator | ok: [testbed-node-3]
2025-09-17 01:00:49.973421 | orchestrator | ok: [testbed-node-5]
2025-09-17 01:00:49.973431 | orchestrator | ok: [testbed-node-4]
2025-09-17 01:00:49.973440 | orchestrator |
2025-09-17 01:00:49.973450 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-09-17 01:00:49.973460 | orchestrator | Wednesday 17 September 2025 00:56:25 +0000 (0:00:01.097) 0:00:05.281 ***
2025-09-17 01:00:49.973469 | orchestrator | ok: [testbed-node-0] => {
2025-09-17 01:00:49.973479 | orchestrator |  "changed": false,
2025-09-17 01:00:49.973488 | orchestrator |  "msg": "All assertions passed"
2025-09-17 01:00:49.973498 | orchestrator | }
2025-09-17 01:00:49.973508 | orchestrator | ok: [testbed-node-1] => {
2025-09-17 01:00:49.973517 | orchestrator |  "changed": false,
2025-09-17 01:00:49.973527 | orchestrator |  "msg": "All assertions passed"
2025-09-17 01:00:49.973536 | orchestrator | }
2025-09-17 01:00:49.973546 | orchestrator | ok: [testbed-node-2] => {
2025-09-17 01:00:49.973555 | orchestrator |  "changed": false,
2025-09-17 01:00:49.973565 | orchestrator |  "msg": "All assertions passed"
2025-09-17 01:00:49.973574 | orchestrator | }
2025-09-17 01:00:49.973583 | orchestrator | ok: [testbed-node-3] => {
2025-09-17 01:00:49.973593 | orchestrator |  "changed": false,
2025-09-17 01:00:49.973602 | orchestrator |  "msg": "All assertions passed"
2025-09-17 01:00:49.973612 | orchestrator | }
2025-09-17 01:00:49.973621 | orchestrator | ok: [testbed-node-4] => {
2025-09-17 01:00:49.973631 | orchestrator |  "changed": false,
2025-09-17 01:00:49.973640 | orchestrator |  "msg": "All assertions passed"
2025-09-17 01:00:49.973650 | orchestrator | }
2025-09-17 01:00:49.973659 | orchestrator | ok: [testbed-node-5] => {
2025-09-17 01:00:49.973668 | orchestrator |  "changed": false,
2025-09-17 01:00:49.973705 | orchestrator |  "msg": "All assertions passed"
2025-09-17 01:00:49.973716 | orchestrator | }
2025-09-17 01:00:49.973726 | orchestrator |
2025-09-17 01:00:49.973736 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-09-17 01:00:49.973746 | orchestrator | Wednesday 17 September 2025 00:56:26 +0000 (0:00:00.677) 0:00:05.958 ***
2025-09-17 01:00:49.973755 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:00:49.973765 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:00:49.973774 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:00:49.973784 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:00:49.973793 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:00:49.973803 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:00:49.973812 | orchestrator |
2025-09-17 01:00:49.973822 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-09-17 01:00:49.973831 | orchestrator | Wednesday 17 September 2025 00:56:26 +0000 (0:00:00.534) 0:00:06.493 ***
2025-09-17 01:00:49.973841 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-09-17 01:00:49.973851 | orchestrator |
2025-09-17 01:00:49.973860 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-09-17 01:00:49.973875 | orchestrator | Wednesday 17 September 2025 00:56:30 +0000 (0:00:03.481) 0:00:09.974 ***
2025-09-17 01:00:49.973885 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-09-17 01:00:49.973895 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-09-17 01:00:49.973904 | orchestrator |
2025-09-17 01:00:49.973914 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-09-17 01:00:49.973923 | orchestrator | Wednesday 17 September 2025 00:56:37 +0000 (0:00:07.098) 0:00:17.073 ***
2025-09-17 01:00:49.973996 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-17 01:00:49.974006 | orchestrator |
2025-09-17 01:00:49.974056 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-09-17 01:00:49.974067 | orchestrator | Wednesday 17 September 2025 00:56:40 +0000 (0:00:03.349) 0:00:20.423 ***
2025-09-17 01:00:49.974076 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-17 01:00:49.974086 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-09-17 01:00:49.974103 | orchestrator |
2025-09-17 01:00:49.974113 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-09-17 01:00:49.974123 | orchestrator | Wednesday 17 September 2025 00:56:44 +0000 (0:00:03.519) 0:00:23.943 ***
2025-09-17 01:00:49.974132 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-17 01:00:49.974142 | orchestrator |
2025-09-17 01:00:49.974152 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-09-17 01:00:49.974161 | orchestrator | Wednesday 17 September 2025 00:56:47 +0000 (0:00:03.265) 0:00:27.209 ***
2025-09-17 01:00:49.974171 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-09-17 01:00:49.974180 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-09-17 01:00:49.974190 | orchestrator |
2025-09-17 01:00:49.974199 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-17 01:00:49.974209 | orchestrator | Wednesday 17 September 2025 00:56:54 +0000 (0:00:06.867) 0:00:34.076 ***
2025-09-17 01:00:49.974219 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:00:49.974228 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:00:49.974238 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:00:49.974247 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:00:49.974257 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:00:49.974266 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:00:49.974276 | orchestrator |
2025-09-17 01:00:49.974285 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-09-17 01:00:49.974295 | orchestrator | Wednesday 17 September 2025 00:56:55 +0000 (0:00:00.788) 0:00:34.865 ***
2025-09-17 01:00:49.974305 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:00:49.974314 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:00:49.974324 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:00:49.974333 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:00:49.974343 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:00:49.974352 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:00:49.974362 | orchestrator |
2025-09-17 01:00:49.974371 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-09-17 01:00:49.974381 | orchestrator | Wednesday 17 September 2025 00:56:57 +0000 (0:00:02.327) 0:00:37.192 ***
2025-09-17 01:00:49.974391 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:00:49.974400 | orchestrator | ok: [testbed-node-1]
2025-09-17 01:00:49.974410 | orchestrator | ok: [testbed-node-2]
2025-09-17 01:00:49.974420 | orchestrator | ok: [testbed-node-3]
2025-09-17 01:00:49.974429 | orchestrator | ok: [testbed-node-4]
2025-09-17 01:00:49.974439 | orchestrator | ok: [testbed-node-5]
2025-09-17 01:00:49.974448 | orchestrator |
2025-09-17 01:00:49.974458 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-17 01:00:49.974468 | orchestrator | Wednesday 17 September 2025 00:56:58 +0000 (0:00:01.249) 0:00:38.442 ***
2025-09-17 01:00:49.974477 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:00:49.974487 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:00:49.974496 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:00:49.974506 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:00:49.974515 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:00:49.974525 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:00:49.974534 | orchestrator |
2025-09-17 01:00:49.974544 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-09-17 01:00:49.974554 | orchestrator | Wednesday 17 September 2025 00:57:01 +0000 (0:00:02.997) 0:00:41.440 ***
2025-09-17 01:00:49.974603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 01:00:49.974631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 01:00:49.974643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 01:00:49.974654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2',
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.974665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.974702 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.974720 | orchestrator | 2025-09-17 01:00:49.974731 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-17 01:00:49.974741 | orchestrator | Wednesday 17 September 2025 00:57:04 +0000 (0:00:02.831) 0:00:44.271 *** 2025-09-17 01:00:49.974750 | orchestrator | [WARNING]: Skipped 2025-09-17 01:00:49.974765 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-17 01:00:49.974775 | orchestrator | due to this access issue: 2025-09-17 01:00:49.974784 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-17 01:00:49.974794 | orchestrator | a directory 2025-09-17 01:00:49.974804 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 01:00:49.974813 | orchestrator | 2025-09-17 01:00:49.974823 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-17 01:00:49.974832 | orchestrator | Wednesday 17 September 2025 00:57:05 +0000 (0:00:01.045) 0:00:45.316 *** 2025-09-17 01:00:49.974842 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 01:00:49.974852 | orchestrator | 2025-09-17 01:00:49.974861 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-17 01:00:49.974871 | orchestrator | Wednesday 17 September 2025 00:57:06 +0000 (0:00:01.157) 0:00:46.474 *** 2025-09-17 01:00:49.974881 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.974892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.974902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.974968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.974982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.974992 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.975002 | orchestrator | 2025-09-17 01:00:49.975012 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-17 01:00:49.975022 | orchestrator | Wednesday 17 September 2025 00:57:11 +0000 (0:00:04.827) 0:00:51.301 *** 2025-09-17 01:00:49.975032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.975060 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.975096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.975108 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.975122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.975133 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.975143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.975153 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.975164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.975173 | orchestrator | skipping: [testbed-node-3] 
2025-09-17 01:00:49.975183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.975200 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.975209 | orchestrator | 2025-09-17 01:00:49.975219 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-17 01:00:49.975228 | orchestrator | Wednesday 17 September 2025 00:57:15 +0000 (0:00:03.375) 0:00:54.677 *** 2025-09-17 01:00:49.975266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.975278 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.975292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.975303 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.975313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.975323 | orchestrator | skipping: [testbed-node-2] 2025-09-17 
01:00:49.975332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.975348 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.975358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.975368 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.975384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.975395 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.975404 | orchestrator | 2025-09-17 01:00:49.975414 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-17 01:00:49.975423 | orchestrator | Wednesday 17 September 2025 00:57:18 +0000 (0:00:03.309) 0:00:57.986 *** 2025-09-17 01:00:49.975437 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.975447 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.975457 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.975466 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.975475 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.975485 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.975495 | orchestrator | 2025-09-17 01:00:49.975504 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-17 01:00:49.975514 | orchestrator | Wednesday 17 September 2025 00:57:21 +0000 (0:00:03.096) 0:01:01.082 *** 2025-09-17 01:00:49.975523 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.975533 | orchestrator | 2025-09-17 01:00:49.975542 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-17 01:00:49.975552 | orchestrator | Wednesday 17 
September 2025 00:57:21 +0000 (0:00:00.125) 0:01:01.207 *** 2025-09-17 01:00:49.975561 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.975571 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.975580 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.975590 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.975600 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.975609 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.975618 | orchestrator | 2025-09-17 01:00:49.975628 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-17 01:00:49.975637 | orchestrator | Wednesday 17 September 2025 00:57:22 +0000 (0:00:00.529) 0:01:01.737 *** 2025-09-17 01:00:49.975647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.975663 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.975673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.975683 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.975702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.975712 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.975726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.975737 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.975747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.975762 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.975772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.975782 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.975792 | orchestrator | 2025-09-17 01:00:49.975802 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-17 01:00:49.975811 | orchestrator | Wednesday 17 September 2025 00:57:25 +0000 (0:00:03.229) 0:01:04.966 *** 2025-09-17 01:00:49.975821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.975837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.975852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.975868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.975878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.975889 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.975899 | orchestrator | 2025-09-17 01:00:49.975909 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-17 
01:00:49.975918 | orchestrator | Wednesday 17 September 2025 00:57:29 +0000 (0:00:04.130) 0:01:09.096 *** 2025-09-17 01:00:49.975949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.975964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.975981 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.975991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.976001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.976016 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.976026 | orchestrator | 2025-09-17 01:00:49.976036 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-17 01:00:49.976050 | orchestrator | Wednesday 17 September 2025 00:57:36 +0000 (0:00:06.619) 0:01:15.716 *** 2025-09-17 01:00:49.976060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.976076 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.976086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.976096 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.976106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.976116 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.976130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.976140 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.976158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.976174 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.976185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.976195 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.976205 | orchestrator | 2025-09-17 01:00:49.976214 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-17 01:00:49.976224 | orchestrator | Wednesday 17 September 2025 00:57:39 +0000 (0:00:03.093) 0:01:18.810 *** 2025-09-17 01:00:49.976233 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.976243 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.976252 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:49.976262 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.976271 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:00:49.976281 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:00:49.976290 | orchestrator | 2025-09-17 01:00:49.976300 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-17 01:00:49.976309 | orchestrator | 
Wednesday 17 September 2025 00:57:42 +0000 (0:00:03.271) 0:01:22.081 *** 2025-09-17 01:00:49.976319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.976329 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.976339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.976349 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.976364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.976380 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.976395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.976405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.976416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.976426 | orchestrator | 2025-09-17 01:00:49.976436 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-17 01:00:49.976445 | orchestrator | Wednesday 17 September 2025 00:57:46 +0000 (0:00:04.130) 0:01:26.212 *** 2025-09-17 01:00:49.976455 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.976464 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.976474 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.976483 | 
orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.976493 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.976502 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.976517 | orchestrator | 2025-09-17 01:00:49.976527 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-17 01:00:49.976537 | orchestrator | Wednesday 17 September 2025 00:57:48 +0000 (0:00:02.411) 0:01:28.624 *** 2025-09-17 01:00:49.976546 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.976556 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.976565 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.976575 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.976585 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.976598 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.976608 | orchestrator | 2025-09-17 01:00:49.976618 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-17 01:00:49.976628 | orchestrator | Wednesday 17 September 2025 00:57:51 +0000 (0:00:02.510) 0:01:31.134 *** 2025-09-17 01:00:49.976637 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.976647 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.976656 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.976666 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.976675 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.976684 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.976694 | orchestrator | 2025-09-17 01:00:49.976704 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-17 01:00:49.976713 | orchestrator | Wednesday 17 September 2025 00:57:54 +0000 (0:00:02.587) 0:01:33.722 *** 2025-09-17 01:00:49.976723 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.976732 | 
orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.976741 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.976751 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.976760 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.976774 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.976783 | orchestrator | 2025-09-17 01:00:49.976793 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-17 01:00:49.976803 | orchestrator | Wednesday 17 September 2025 00:57:55 +0000 (0:00:01.903) 0:01:35.626 *** 2025-09-17 01:00:49.976812 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.976822 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.976831 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.976841 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.976850 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.976860 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.976869 | orchestrator | 2025-09-17 01:00:49.976879 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-17 01:00:49.976889 | orchestrator | Wednesday 17 September 2025 00:57:57 +0000 (0:00:01.888) 0:01:37.514 *** 2025-09-17 01:00:49.976898 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.976908 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.976917 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.976949 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.976959 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.976968 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.976978 | orchestrator | 2025-09-17 01:00:49.976987 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-17 01:00:49.976997 | orchestrator | Wednesday 17 September 2025 00:58:00 
+0000 (0:00:02.664) 0:01:40.179 *** 2025-09-17 01:00:49.977006 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-17 01:00:49.977016 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.977025 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-17 01:00:49.977035 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.977044 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-17 01:00:49.977054 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.977069 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-17 01:00:49.977079 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.977088 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-17 01:00:49.977098 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.977107 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-17 01:00:49.977117 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.977126 | orchestrator | 2025-09-17 01:00:49.977136 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-17 01:00:49.977146 | orchestrator | Wednesday 17 September 2025 00:58:02 +0000 (0:00:01.925) 0:01:42.104 *** 2025-09-17 01:00:49.977156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.977166 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.977181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.977192 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.977206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.977217 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.977226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.977242 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.977252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.977262 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.977272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.977282 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.977291 | orchestrator | 2025-09-17 01:00:49.977301 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-17 01:00:49.977310 | orchestrator | Wednesday 17 September 2025 00:58:04 +0000 (0:00:01.642) 0:01:43.747 *** 2025-09-17 01:00:49.977325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.977341 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.977351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.977366 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.977376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.977386 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.977396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.977406 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.977421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-09-17 01:00:49.977431 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.977445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.977456 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.977470 | orchestrator | 2025-09-17 01:00:49.977479 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-17 01:00:49.977489 | orchestrator | Wednesday 17 September 2025 00:58:06 +0000 (0:00:02.302) 0:01:46.049 *** 2025-09-17 01:00:49.977619 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.977631 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.977641 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.977650 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.977659 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.977669 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.977678 | orchestrator | 2025-09-17 01:00:49.977688 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-17 01:00:49.977697 | orchestrator | Wednesday 17 September 2025 00:58:08 +0000 (0:00:01.613) 0:01:47.663 *** 2025-09-17 01:00:49.977707 | orchestrator | skipping: 
[testbed-node-0] 2025-09-17 01:00:49.977716 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.977726 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.977735 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:00:49.977745 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:00:49.977754 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:00:49.977764 | orchestrator | 2025-09-17 01:00:49.977773 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-17 01:00:49.977783 | orchestrator | Wednesday 17 September 2025 00:58:11 +0000 (0:00:03.064) 0:01:50.727 *** 2025-09-17 01:00:49.977792 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.977801 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.977811 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.977820 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.977830 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.977839 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.977849 | orchestrator | 2025-09-17 01:00:49.977858 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-17 01:00:49.977868 | orchestrator | Wednesday 17 September 2025 00:58:12 +0000 (0:00:01.872) 0:01:52.600 *** 2025-09-17 01:00:49.977877 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.977887 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.977896 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.977906 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.977915 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.977924 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.977981 | orchestrator | 2025-09-17 01:00:49.977991 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-17 01:00:49.978001 | 
orchestrator | Wednesday 17 September 2025 00:58:15 +0000 (0:00:02.831) 0:01:55.432 *** 2025-09-17 01:00:49.978010 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.978053 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.978063 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.978072 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.978082 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.978091 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.978101 | orchestrator | 2025-09-17 01:00:49.978110 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-17 01:00:49.978120 | orchestrator | Wednesday 17 September 2025 00:58:18 +0000 (0:00:02.510) 0:01:57.943 *** 2025-09-17 01:00:49.978129 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.978139 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.978148 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.978158 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.978167 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.978177 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.978186 | orchestrator | 2025-09-17 01:00:49.978196 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-17 01:00:49.978205 | orchestrator | Wednesday 17 September 2025 00:58:20 +0000 (0:00:02.052) 0:01:59.995 *** 2025-09-17 01:00:49.978215 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.978224 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.978240 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.978250 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.978259 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.978269 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.978278 | orchestrator | 2025-09-17 
01:00:49.978288 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-17 01:00:49.978298 | orchestrator | Wednesday 17 September 2025 00:58:22 +0000 (0:00:01.692) 0:02:01.688 *** 2025-09-17 01:00:49.978307 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.978317 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.978326 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.978336 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.978345 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.978355 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.978364 | orchestrator | 2025-09-17 01:00:49.978374 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-17 01:00:49.978390 | orchestrator | Wednesday 17 September 2025 00:58:23 +0000 (0:00:01.837) 0:02:03.526 *** 2025-09-17 01:00:49.978398 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.978406 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.978414 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.978421 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.978429 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.978437 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.978445 | orchestrator | 2025-09-17 01:00:49.978452 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-17 01:00:49.978460 | orchestrator | Wednesday 17 September 2025 00:58:25 +0000 (0:00:02.042) 0:02:05.568 *** 2025-09-17 01:00:49.978468 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 01:00:49.978476 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.978484 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 01:00:49.978492 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.978504 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 01:00:49.978513 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.978520 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 01:00:49.978528 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.978536 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 01:00:49.978544 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.978552 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 01:00:49.978559 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.978567 | orchestrator | 2025-09-17 01:00:49.978575 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-17 01:00:49.978583 | orchestrator | Wednesday 17 September 2025 00:58:29 +0000 (0:00:03.261) 0:02:08.830 *** 2025-09-17 01:00:49.978591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.978607 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.978616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.978624 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.978636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 01:00:49.978645 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.978657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.978665 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.978674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.978682 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.978690 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 01:00:49.978703 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.978711 | orchestrator | 2025-09-17 01:00:49.978719 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-17 01:00:49.978726 | orchestrator | Wednesday 17 September 2025 00:58:30 +0000 (0:00:01.685) 0:02:10.515 *** 2025-09-17 01:00:49.978735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.978749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.978762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 01:00:49.978770 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.978833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.978842 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 01:00:49.978850 | orchestrator | 2025-09-17 01:00:49.978859 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-17 01:00:49.978867 | orchestrator | Wednesday 17 September 2025 00:58:34 +0000 (0:00:03.267) 0:02:13.783 *** 2025-09-17 01:00:49.978874 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:49.978882 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:49.978890 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:49.978898 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:00:49.978905 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:00:49.978913 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:00:49.978921 | orchestrator | 2025-09-17 01:00:49.978943 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-17 01:00:49.978952 | orchestrator | Wednesday 17 September 2025 00:58:34 +0000 (0:00:00.565) 0:02:14.348 *** 2025-09-17 01:00:49.978964 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:49.978972 | orchestrator | 2025-09-17 01:00:49.978980 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-17 01:00:49.978988 | orchestrator | Wednesday 17 September 2025 00:58:36 +0000 (0:00:02.288) 0:02:16.637 *** 2025-09-17 01:00:49.978996 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:49.979003 | orchestrator | 2025-09-17 01:00:49.979011 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-17 01:00:49.979019 | orchestrator | Wednesday 17 September 2025 00:58:39 +0000 (0:00:02.322) 0:02:18.960 *** 2025-09-17 01:00:49.979027 | 
orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:49.979035 | orchestrator | 2025-09-17 01:00:49.979042 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 01:00:49.979050 | orchestrator | Wednesday 17 September 2025 00:59:23 +0000 (0:00:44.338) 0:03:03.298 *** 2025-09-17 01:00:49.979058 | orchestrator | 2025-09-17 01:00:49.979066 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 01:00:49.979074 | orchestrator | Wednesday 17 September 2025 00:59:23 +0000 (0:00:00.059) 0:03:03.357 *** 2025-09-17 01:00:49.979081 | orchestrator | 2025-09-17 01:00:49.979093 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 01:00:49.979102 | orchestrator | Wednesday 17 September 2025 00:59:23 +0000 (0:00:00.168) 0:03:03.526 *** 2025-09-17 01:00:49.979115 | orchestrator | 2025-09-17 01:00:49.979123 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 01:00:49.979131 | orchestrator | Wednesday 17 September 2025 00:59:23 +0000 (0:00:00.071) 0:03:03.597 *** 2025-09-17 01:00:49.979139 | orchestrator | 2025-09-17 01:00:49.979147 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 01:00:49.979155 | orchestrator | Wednesday 17 September 2025 00:59:24 +0000 (0:00:00.064) 0:03:03.662 *** 2025-09-17 01:00:49.979162 | orchestrator | 2025-09-17 01:00:49.979170 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 01:00:49.979178 | orchestrator | Wednesday 17 September 2025 00:59:24 +0000 (0:00:00.061) 0:03:03.724 *** 2025-09-17 01:00:49.979186 | orchestrator | 2025-09-17 01:00:49.979194 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-17 01:00:49.979202 | orchestrator | Wednesday 17 September 2025 
00:59:24 +0000 (0:00:00.063) 0:03:03.787 *** 2025-09-17 01:00:49.979209 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:49.979217 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:00:49.979225 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:00:49.979233 | orchestrator | 2025-09-17 01:00:49.979241 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-17 01:00:49.979248 | orchestrator | Wednesday 17 September 2025 00:59:52 +0000 (0:00:28.522) 0:03:32.310 *** 2025-09-17 01:00:49.979256 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:00:49.979264 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:00:49.979272 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:00:49.979280 | orchestrator | 2025-09-17 01:00:49.979288 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 01:00:49.979296 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-17 01:00:49.979305 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-17 01:00:49.979313 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-17 01:00:49.979321 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-17 01:00:49.979329 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-17 01:00:49.979337 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-17 01:00:49.979345 | orchestrator | 2025-09-17 01:00:49.979353 | orchestrator | 2025-09-17 01:00:49.979361 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 01:00:49.979369 | orchestrator | Wednesday 17 
September 2025 01:00:46 +0000 (0:00:54.155) 0:04:26.466 ***
2025-09-17 01:00:49.979377 | orchestrator | ===============================================================================
2025-09-17 01:00:49.979385 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 54.16s
2025-09-17 01:00:49.979393 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.34s
2025-09-17 01:00:49.979400 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.52s
2025-09-17 01:00:49.979408 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.10s
2025-09-17 01:00:49.979416 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 6.87s
2025-09-17 01:00:49.979424 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.62s
2025-09-17 01:00:49.979431 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.83s
2025-09-17 01:00:49.979444 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.13s
2025-09-17 01:00:49.979452 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.13s
2025-09-17 01:00:49.979460 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.52s
2025-09-17 01:00:49.979471 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.48s
2025-09-17 01:00:49.979480 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.38s
2025-09-17 01:00:49.979488 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.35s
2025-09-17 01:00:49.979496 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.31s
2025-09-17 01:00:49.979504 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.27s
2025-09-17 01:00:49.979511 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.27s
2025-09-17 01:00:49.979519 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.27s
2025-09-17 01:00:49.979527 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.26s
2025-09-17 01:00:49.979534 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.23s
2025-09-17 01:00:49.979542 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.10s
2025-09-17 01:00:49.979554 | orchestrator | 2025-09-17 01:00:49 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED
2025-09-17 01:00:49.979562 | orchestrator | 2025-09-17 01:00:49 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state STARTED
2025-09-17 01:00:49.979570 | orchestrator | 2025-09-17 01:00:49 | INFO  | Wait 1 second(s) until the next check
2025-09-17 01:00:53.024071 | orchestrator | 2025-09-17 01:00:53 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED
2025-09-17 01:00:53.025206 | orchestrator | 2025-09-17 01:00:53 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED
2025-09-17 01:00:53.027984 | orchestrator | 2025-09-17 01:00:53 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED
2025-09-17 01:00:53.029553 | orchestrator | 2025-09-17 01:00:53 | INFO  | Task 019df03b-35ea-4252-9e42-f60ed1bfa9f5 is in state SUCCESS
2025-09-17 01:00:53.031126 | orchestrator |
2025-09-17 01:00:53.031163 | orchestrator |
2025-09-17 01:00:53.031176 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 01:00:53.031188 | orchestrator |
2025-09-17 01:00:53.031200 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 01:00:53.031212 | orchestrator | Wednesday 17 September 2025 00:59:03 +0000 (0:00:00.241) 0:00:00.241 ***
2025-09-17 01:00:53.031224 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:00:53.031237 | orchestrator | ok: [testbed-node-1]
2025-09-17 01:00:53.031248 | orchestrator | ok: [testbed-node-2]
2025-09-17 01:00:53.031516 | orchestrator |
2025-09-17 01:00:53.031528 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 01:00:53.031539 | orchestrator | Wednesday 17 September 2025 00:59:03 +0000 (0:00:00.278) 0:00:00.519 ***
2025-09-17 01:00:53.031550 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-09-17 01:00:53.031561 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-09-17 01:00:53.031572 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-09-17 01:00:53.031583 | orchestrator |
2025-09-17 01:00:53.031594 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-09-17 01:00:53.031604 | orchestrator |
2025-09-17 01:00:53.031615 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-17 01:00:53.031626 | orchestrator | Wednesday 17 September 2025 00:59:04 +0000 (0:00:00.394) 0:00:00.914 ***
2025-09-17 01:00:53.031636 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 01:00:53.031674 | orchestrator |
2025-09-17 01:00:53.031686 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-09-17 01:00:53.031698 | orchestrator | Wednesday 17 September 2025 00:59:04 +0000 (0:00:00.523) 0:00:01.438 ***
2025-09-17 01:00:53.031710 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-09-17 01:00:53.031721 | orchestrator |
2025-09-17 01:00:53.031733 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-09-17 01:00:53.031744 | orchestrator | Wednesday 17 September 2025 00:59:08 +0000 (0:00:03.741) 0:00:05.180 ***
2025-09-17 01:00:53.031756 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-09-17 01:00:53.031768 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-09-17 01:00:53.031779 | orchestrator |
2025-09-17 01:00:53.031812 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-09-17 01:00:53.031824 | orchestrator | Wednesday 17 September 2025 00:59:15 +0000 (0:00:06.986) 0:00:12.166 ***
2025-09-17 01:00:53.031835 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-17 01:00:53.031846 | orchestrator |
2025-09-17 01:00:53.031857 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-09-17 01:00:53.031868 | orchestrator | Wednesday 17 September 2025 00:59:18 +0000 (0:00:03.414) 0:00:15.580 ***
2025-09-17 01:00:53.031879 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-17 01:00:53.031890 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-09-17 01:00:53.031901 | orchestrator |
2025-09-17 01:00:53.031912 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-09-17 01:00:53.031923 | orchestrator | Wednesday 17 September 2025 00:59:23 +0000 (0:00:04.287) 0:00:19.867 ***
2025-09-17 01:00:53.031961 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-17 01:00:53.031973 | orchestrator |
2025-09-17 01:00:53.031984 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-09-17 01:00:53.031995 | orchestrator | Wednesday 17 September 2025 00:59:26 +0000 (0:00:03.482) 0:00:23.349 ***
2025-09-17 01:00:53.032006 | orchestrator | changed:
[testbed-node-0] => (item=magnum -> service -> admin) 2025-09-17 01:00:53.032016 | orchestrator | 2025-09-17 01:00:53.032027 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-17 01:00:53.032039 | orchestrator | Wednesday 17 September 2025 00:59:30 +0000 (0:00:04.168) 0:00:27.518 *** 2025-09-17 01:00:53.032050 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:53.032061 | orchestrator | 2025-09-17 01:00:53.032071 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-17 01:00:53.032083 | orchestrator | Wednesday 17 September 2025 00:59:34 +0000 (0:00:03.413) 0:00:30.931 *** 2025-09-17 01:00:53.032094 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:53.032105 | orchestrator | 2025-09-17 01:00:53.032116 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-17 01:00:53.032127 | orchestrator | Wednesday 17 September 2025 00:59:38 +0000 (0:00:04.017) 0:00:34.949 *** 2025-09-17 01:00:53.032138 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:53.032149 | orchestrator | 2025-09-17 01:00:53.032172 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-17 01:00:53.032184 | orchestrator | Wednesday 17 September 2025 00:59:42 +0000 (0:00:03.905) 0:00:38.854 *** 2025-09-17 01:00:53.032212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.032237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.032249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.032262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.032279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.032299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.032318 | orchestrator | 2025-09-17 01:00:53.032329 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-17 01:00:53.032340 | orchestrator | Wednesday 17 September 2025 00:59:44 +0000 (0:00:02.295) 0:00:41.150 *** 2025-09-17 01:00:53.032352 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:53.032362 | orchestrator | 2025-09-17 01:00:53.032373 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-17 01:00:53.032384 | orchestrator | Wednesday 17 September 2025 00:59:44 +0000 (0:00:00.121) 0:00:41.272 *** 2025-09-17 01:00:53.032395 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:53.032406 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:53.032417 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:53.032427 | orchestrator | 2025-09-17 01:00:53.032438 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-17 01:00:53.032449 | orchestrator | Wednesday 17 September 2025 00:59:45 +0000 (0:00:00.656) 0:00:41.928 *** 2025-09-17 01:00:53.032460 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 01:00:53.032471 | orchestrator | 2025-09-17 01:00:53.032481 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-17 01:00:53.032492 | orchestrator | Wednesday 
17 September 2025 00:59:46 +0000 (0:00:01.539) 0:00:43.468 *** 2025-09-17 01:00:53.032504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.032516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.032528 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.032554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.032567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.032579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.032590 | orchestrator | 2025-09-17 01:00:53.032601 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-17 01:00:53.032612 | orchestrator | Wednesday 17 September 2025 00:59:50 +0000 (0:00:03.135) 0:00:46.603 *** 2025-09-17 01:00:53.032623 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:00:53.032634 | orchestrator | ok: [testbed-node-1] 2025-09-17 01:00:53.032645 | orchestrator | ok: [testbed-node-2] 2025-09-17 01:00:53.032656 | orchestrator | 2025-09-17 01:00:53.032667 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-17 01:00:53.032678 | orchestrator | Wednesday 17 September 2025 00:59:50 +0000 (0:00:00.775) 0:00:47.379 *** 2025-09-17 01:00:53.032720 | 
orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 01:00:53.032732 | orchestrator | 2025-09-17 01:00:53.032743 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-17 01:00:53.032754 | orchestrator | Wednesday 17 September 2025 00:59:52 +0000 (0:00:01.301) 0:00:48.681 *** 2025-09-17 01:00:53.032770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.032795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.032807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.032819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 
01:00:53.032830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.032842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.032859 | orchestrator | 2025-09-17 01:00:53.032874 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-17 01:00:53.032885 | orchestrator | Wednesday 17 September 2025 00:59:54 +0000 (0:00:02.771) 0:00:51.452 *** 2025-09-17 01:00:53.032904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 01:00:53.032916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 01:00:53.032959 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:53.032972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 01:00:53.032984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 01:00:53.033002 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:53.033018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 01:00:53.033037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 01:00:53.033049 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:53.033060 | orchestrator | 2025-09-17 01:00:53.033071 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-17 01:00:53.033082 | orchestrator | Wednesday 17 September 2025 00:59:55 +0000 (0:00:00.739) 0:00:52.191 *** 2025-09-17 01:00:53.033093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 01:00:53.033105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 01:00:53.033130 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:53.033141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 01:00:53.033158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 01:00:53.033169 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:53.033187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 01:00:53.033199 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 01:00:53.033211 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:53.033222 | orchestrator | 2025-09-17 01:00:53.033233 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-17 01:00:53.033244 | orchestrator | Wednesday 17 September 2025 00:59:56 +0000 (0:00:01.077) 0:00:53.269 *** 2025-09-17 01:00:53.033255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}}) 2025-09-17 01:00:53.033278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.033296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.033309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.033320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.033338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.033349 | orchestrator | 2025-09-17 01:00:53.033361 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-17 01:00:53.033372 | orchestrator | Wednesday 17 September 2025 00:59:59 +0000 (0:00:02.683) 0:00:55.952 *** 2025-09-17 01:00:53.033388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.033406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.033418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.033430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.033448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.033459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.033470 | orchestrator | 2025-09-17 01:00:53.033486 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-17 01:00:53.033498 | orchestrator | Wednesday 17 September 2025 01:00:04 +0000 (0:00:04.851) 0:01:00.803 *** 2025-09-17 01:00:53.033515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 01:00:53.033527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 01:00:53.033539 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:53.033550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 01:00:53.033568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 01:00:53.033579 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:53.033595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 01:00:53.033612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 01:00:53.033624 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:53.033635 | orchestrator | 2025-09-17 01:00:53.033646 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-17 01:00:53.033657 | orchestrator | Wednesday 17 September 2025 01:00:04 +0000 (0:00:00.696) 0:01:01.499 *** 2025-09-17 01:00:53.033668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.033686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.033697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 01:00:53.033714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.033732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.033745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:00:53.033763 | orchestrator | 2025-09-17 01:00:53.033774 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-17 01:00:53.033785 | orchestrator | Wednesday 17 September 2025 01:00:07 +0000 (0:00:02.629) 0:01:04.128 *** 2025-09-17 01:00:53.033796 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:00:53.033807 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:00:53.033818 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:00:53.033828 | orchestrator | 2025-09-17 01:00:53.033839 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-17 01:00:53.033850 | orchestrator | Wednesday 17 September 2025 01:00:07 +0000 (0:00:00.348) 0:01:04.477 *** 2025-09-17 01:00:53.033861 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:53.033872 | orchestrator | 2025-09-17 01:00:53.033882 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-17 01:00:53.033893 | orchestrator | Wednesday 17 September 2025 01:00:09 +0000 (0:00:01.977) 0:01:06.454 *** 2025-09-17 01:00:53.033904 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:53.033915 | orchestrator | 2025-09-17 01:00:53.033979 | orchestrator | TASK [magnum : Running Magnum bootstrap 
container] ***************************** 2025-09-17 01:00:53.033993 | orchestrator | Wednesday 17 September 2025 01:00:12 +0000 (0:00:02.270) 0:01:08.725 *** 2025-09-17 01:00:53.034004 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:53.034015 | orchestrator | 2025-09-17 01:00:53.034078 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-17 01:00:53.034089 | orchestrator | Wednesday 17 September 2025 01:00:27 +0000 (0:00:15.786) 0:01:24.512 *** 2025-09-17 01:00:53.034100 | orchestrator | 2025-09-17 01:00:53.034111 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-17 01:00:53.034122 | orchestrator | Wednesday 17 September 2025 01:00:27 +0000 (0:00:00.059) 0:01:24.572 *** 2025-09-17 01:00:53.034133 | orchestrator | 2025-09-17 01:00:53.034190 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-17 01:00:53.034203 | orchestrator | Wednesday 17 September 2025 01:00:28 +0000 (0:00:00.059) 0:01:24.632 *** 2025-09-17 01:00:53.034214 | orchestrator | 2025-09-17 01:00:53.034225 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-17 01:00:53.034236 | orchestrator | Wednesday 17 September 2025 01:00:28 +0000 (0:00:00.062) 0:01:24.694 *** 2025-09-17 01:00:53.034246 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:53.034257 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:00:53.034268 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:00:53.034279 | orchestrator | 2025-09-17 01:00:53.034289 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-17 01:00:53.034300 | orchestrator | Wednesday 17 September 2025 01:00:40 +0000 (0:00:12.269) 0:01:36.964 *** 2025-09-17 01:00:53.034311 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:00:53.034321 | orchestrator | changed: 
[testbed-node-1] 2025-09-17 01:00:53.034332 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:00:53.034342 | orchestrator | 2025-09-17 01:00:53.034352 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 01:00:53.034367 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-17 01:00:53.034378 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 01:00:53.034396 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 01:00:53.034406 | orchestrator | 2025-09-17 01:00:53.034416 | orchestrator | 2025-09-17 01:00:53.034426 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 01:00:53.034435 | orchestrator | Wednesday 17 September 2025 01:00:52 +0000 (0:00:11.771) 0:01:48.735 *** 2025-09-17 01:00:53.034445 | orchestrator | =============================================================================== 2025-09-17 01:00:53.034455 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.79s 2025-09-17 01:00:53.034472 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.27s 2025-09-17 01:00:53.034483 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.77s 2025-09-17 01:00:53.034492 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.99s 2025-09-17 01:00:53.034502 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.85s 2025-09-17 01:00:53.034511 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.29s 2025-09-17 01:00:53.034521 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.17s 2025-09-17 
01:00:53.034531 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.02s 2025-09-17 01:00:53.034540 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.91s 2025-09-17 01:00:53.034550 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.74s 2025-09-17 01:00:53.034559 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.48s 2025-09-17 01:00:53.034569 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.41s 2025-09-17 01:00:53.034579 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.41s 2025-09-17 01:00:53.034588 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.14s 2025-09-17 01:00:53.034598 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.77s 2025-09-17 01:00:53.034607 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.68s 2025-09-17 01:00:53.034617 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.63s 2025-09-17 01:00:53.034626 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.30s 2025-09-17 01:00:53.034636 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.27s 2025-09-17 01:00:53.034645 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.98s 2025-09-17 01:00:53.034655 | orchestrator | 2025-09-17 01:00:53 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:00:56.068588 | orchestrator | 2025-09-17 01:00:56 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:00:56.069060 | orchestrator | 2025-09-17 01:00:56 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 
01:00:56.071863 | orchestrator | 2025-09-17 01:00:56 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:00:56.073298 | orchestrator | 2025-09-17 01:00:56 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:00:56.073390 | orchestrator | 2025-09-17 01:00:56 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:01:32.534219 | orchestrator | 2025-09-17 01:01:32 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:01:32.534337 | orchestrator | 2025-09-17 01:01:32 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:01:32.534491 | orchestrator | 2025-09-17 01:01:32 | INFO  | Task 
4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:01:32.534903 | orchestrator | 2025-09-17 01:01:32 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:01:32.534952 | orchestrator | 2025-09-17 01:01:32 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:01:35.622719 | orchestrator | 2025-09-17 01:01:35 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:01:35.622814 | orchestrator | 2025-09-17 01:01:35 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:01:35.623815 | orchestrator | 2025-09-17 01:01:35 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:01:35.624289 | orchestrator | 2025-09-17 01:01:35 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:01:35.624310 | orchestrator | 2025-09-17 01:01:35 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:01:38.669281 | orchestrator | 2025-09-17 01:01:38 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:01:38.671080 | orchestrator | 2025-09-17 01:01:38 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:01:38.673101 | orchestrator | 2025-09-17 01:01:38 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:01:38.674882 | orchestrator | 2025-09-17 01:01:38 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:01:38.675123 | orchestrator | 2025-09-17 01:01:38 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:01:41.704088 | orchestrator | 2025-09-17 01:01:41 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:01:41.704316 | orchestrator | 2025-09-17 01:01:41 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:01:41.705299 | orchestrator | 2025-09-17 01:01:41 | INFO  | Task 
4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:01:41.707440 | orchestrator | 2025-09-17 01:01:41 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:01:41.707570 | orchestrator | 2025-09-17 01:01:41 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:01:44.752146 | orchestrator | 2025-09-17 01:01:44 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:01:44.755050 | orchestrator | 2025-09-17 01:01:44 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:01:44.755141 | orchestrator | 2025-09-17 01:01:44 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:01:44.756878 | orchestrator | 2025-09-17 01:01:44 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:01:44.757525 | orchestrator | 2025-09-17 01:01:44 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:01:47.796489 | orchestrator | 2025-09-17 01:01:47 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:01:47.799407 | orchestrator | 2025-09-17 01:01:47 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:01:47.801782 | orchestrator | 2025-09-17 01:01:47 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:01:47.803905 | orchestrator | 2025-09-17 01:01:47 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:01:47.804521 | orchestrator | 2025-09-17 01:01:47 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:01:50.850635 | orchestrator | 2025-09-17 01:01:50 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:01:50.851658 | orchestrator | 2025-09-17 01:01:50 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:01:50.852894 | orchestrator | 2025-09-17 01:01:50 | INFO  | Task 
4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:01:50.854642 | orchestrator | 2025-09-17 01:01:50 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:01:50.854711 | orchestrator | 2025-09-17 01:01:50 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:01:53.886402 | orchestrator | 2025-09-17 01:01:53 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:01:53.886696 | orchestrator | 2025-09-17 01:01:53 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:01:53.888234 | orchestrator | 2025-09-17 01:01:53 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:01:53.888906 | orchestrator | 2025-09-17 01:01:53 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:01:53.890581 | orchestrator | 2025-09-17 01:01:53 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:01:56.912031 | orchestrator | 2025-09-17 01:01:56 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:01:56.912263 | orchestrator | 2025-09-17 01:01:56 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:01:56.912815 | orchestrator | 2025-09-17 01:01:56 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:01:56.913584 | orchestrator | 2025-09-17 01:01:56 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:01:56.913607 | orchestrator | 2025-09-17 01:01:56 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:00.003898 | orchestrator | 2025-09-17 01:01:59 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:00.004072 | orchestrator | 2025-09-17 01:01:59 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:02:00.004088 | orchestrator | 2025-09-17 01:01:59 | INFO  | Task 
4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:00.004099 | orchestrator | 2025-09-17 01:01:59 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:00.004111 | orchestrator | 2025-09-17 01:01:59 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:02.975445 | orchestrator | 2025-09-17 01:02:02 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:02.976997 | orchestrator | 2025-09-17 01:02:02 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:02:02.977435 | orchestrator | 2025-09-17 01:02:02 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:02.978655 | orchestrator | 2025-09-17 01:02:02 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:02.978687 | orchestrator | 2025-09-17 01:02:02 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:06.008519 | orchestrator | 2025-09-17 01:02:06 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:06.011466 | orchestrator | 2025-09-17 01:02:06 | INFO  | Task 725fb016-32a4-4033-be4c-31dda4fcc965 is in state STARTED 2025-09-17 01:02:06.013231 | orchestrator | 2025-09-17 01:02:06 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:06.014769 | orchestrator | 2025-09-17 01:02:06 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:06.014883 | orchestrator | 2025-09-17 01:02:06 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:09.052394 | orchestrator | 2025-09-17 01:02:09 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:09.053503 | orchestrator | 2025-09-17 01:02:09 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:09.058368 | orchestrator | 2025-09-17 01:02:09 | INFO  | Task 
725fb016-32a4-4033-be4c-31dda4fcc965 is in state SUCCESS 2025-09-17 01:02:09.060165 | orchestrator | 2025-09-17 01:02:09.060197 | orchestrator | 2025-09-17 01:02:09.060209 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 01:02:09.060221 | orchestrator | 2025-09-17 01:02:09.060232 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 01:02:09.060243 | orchestrator | Wednesday 17 September 2025 00:59:18 +0000 (0:00:00.256) 0:00:00.256 *** 2025-09-17 01:02:09.060254 | orchestrator | ok: [testbed-manager] 2025-09-17 01:02:09.060267 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:02:09.060277 | orchestrator | ok: [testbed-node-1] 2025-09-17 01:02:09.060288 | orchestrator | ok: [testbed-node-2] 2025-09-17 01:02:09.060299 | orchestrator | ok: [testbed-node-3] 2025-09-17 01:02:09.060310 | orchestrator | ok: [testbed-node-4] 2025-09-17 01:02:09.060320 | orchestrator | ok: [testbed-node-5] 2025-09-17 01:02:09.060331 | orchestrator | 2025-09-17 01:02:09.060342 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 01:02:09.060352 | orchestrator | Wednesday 17 September 2025 00:59:19 +0000 (0:00:00.687) 0:00:00.943 *** 2025-09-17 01:02:09.060364 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-17 01:02:09.060375 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-17 01:02:09.060385 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-17 01:02:09.060396 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-17 01:02:09.060407 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-17 01:02:09.060417 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-17 01:02:09.060428 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-17 
01:02:09.060578 | orchestrator | 2025-09-17 01:02:09.060591 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-17 01:02:09.060603 | orchestrator | 2025-09-17 01:02:09.060615 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-17 01:02:09.060626 | orchestrator | Wednesday 17 September 2025 00:59:19 +0000 (0:00:00.614) 0:00:01.557 *** 2025-09-17 01:02:09.060638 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 01:02:09.060652 | orchestrator | 2025-09-17 01:02:09.060663 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-17 01:02:09.060675 | orchestrator | Wednesday 17 September 2025 00:59:21 +0000 (0:00:01.252) 0:00:02.810 *** 2025-09-17 01:02:09.060690 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 01:02:09.060706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.060734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.060755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.060808 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.060824 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.060837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.060853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.060869 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.060885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.060906 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.060920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.061326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.061349 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 01:02:09.061365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.061379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.061390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.061411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.061427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.062188 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.062282 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.062298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.062310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.062324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.062367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.062380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 
01:02:09.062405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.062770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063184 | orchestrator | 2025-09-17 01:02:09.063198 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-17 01:02:09.063209 | orchestrator | Wednesday 17 September 2025 00:59:23 +0000 (0:00:02.917) 0:00:05.727 *** 2025-09-17 01:02:09.063222 | orchestrator | included: 
/ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 01:02:09.063234 | orchestrator | 2025-09-17 01:02:09.063245 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-17 01:02:09.063256 | orchestrator | Wednesday 17 September 2025 00:59:25 +0000 (0:00:01.249) 0:00:06.977 *** 2025-09-17 01:02:09.063267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.063292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.063304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.063315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.063424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 01:02:09.063443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.063455 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.063467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063479 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.063499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.063510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.063632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.063644 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.063655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063674 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.063685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063697 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.063708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.063770 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 01:02:09.063785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.063803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.063814 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.063837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.063907 | orchestrator | 2025-09-17 01:02:09.063918 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-17 01:02:09.063963 | orchestrator | Wednesday 17 September 2025 00:59:31 +0000 (0:00:06.333) 0:00:13.310 *** 2025-09-17 01:02:09.063975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.063994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064040 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:09.064092 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-17 01:02:09.064107 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.064119 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064138 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-17 01:02:09.064150 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.064173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064275 | orchestrator | skipping: [testbed-manager] 2025-09-17 01:02:09.064289 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:09.064303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.064318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064373 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:09.064422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.064445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064471 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:02:09.064485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.064498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064526 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:02:09.064540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.064559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064624 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:02:09.064636 | orchestrator | 2025-09-17 01:02:09.064647 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-17 01:02:09.064657 | orchestrator | Wednesday 17 September 2025 00:59:32 +0000 (0:00:01.376) 0:00:14.687 *** 2025-09-17 01:02:09.064669 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-17 01:02:09.064680 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.064691 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.064702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064714 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064778 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-17 01:02:09.064793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064804 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064826 | orchestrator | skipping: [testbed-manager] 2025-09-17 01:02:09.064837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2025-09-17 01:02:09.064848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.064946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.064958 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:09.064969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.064980 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:09.064991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.065002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.065014 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.065025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 01:02:09.065043 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:09.065089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.065103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.065115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.065126 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:02:09.065137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.065149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.065160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.065171 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:02:09.065182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 01:02:09.065200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.065245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 01:02:09.065258 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:02:09.065270 | orchestrator | 2025-09-17 01:02:09.065281 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-17 01:02:09.065291 | orchestrator | Wednesday 17 September 2025 00:59:34 +0000 (0:00:01.811) 0:00:16.498 *** 2025-09-17 01:02:09.065303 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 01:02:09.065314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-09-17 01:02:09.065325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.065336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.065354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.065365 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.065409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.065423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.065435 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.065446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.065457 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.065469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.065491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.065503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.065548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.065562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.065573 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 01:02:09.065586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.065597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.065615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.065626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.065672 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.065686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': 
{}}})
2025-09-17 01:02:09.065697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', ...})
2025-09-17 01:02:09.065708 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', ...})
2025-09-17 01:02:09.065720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', ...})
2025-09-17 01:02:09.065738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', ...})
2025-09-17 01:02:09.065749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', ...})
2025-09-17 01:02:09.065760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', ...})
2025-09-17 01:02:09.065771 | orchestrator |
2025-09-17 01:02:09.065782 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-09-17 01:02:09.065798 | orchestrator | Wednesday 17 September 2025  00:59:40 +0000 (0:00:05.686)       0:00:22.185 ***
2025-09-17 01:02:09.065809 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 01:02:09.065820 | orchestrator |
2025-09-17 01:02:09.065831 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-09-17 01:02:09.065870 | orchestrator | Wednesday 17 September 2025  00:59:41 +0000 (0:00:00.996)       0:00:23.181 ***
2025-09-17 01:02:09.065883 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-09-17 01:02:09.065896 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-09-17 01:02:09.065908 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', ...})
2025-09-17 01:02:09.065979 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', ...})
2025-09-17 01:02:09.065994 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-09-17 01:02:09.066006 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-09-17 01:02:09.066107 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-09-17 01:02:09.066125 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-09-17 01:02:09.066136 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', ...})
2025-09-17 01:02:09.066148 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', ...})
2025-09-17 01:02:09.066167 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-09-17 01:02:09.066179 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-09-17 01:02:09.066190 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-09-17 01:02:09.066235 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', ...})
2025-09-17 01:02:09.066249 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-09-17 01:02:09.066261 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-09-17 01:02:09.066272 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', ...})
2025-09-17 01:02:09.066291 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-09-17 01:02:09.066302 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-09-17 01:02:09.066313 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-09-17 01:02:09.066357 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-09-17 01:02:09.066371 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-09-17 01:02:09.066382 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-09-17 01:02:09.066400 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-09-17 01:02:09.066411 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-09-17 01:02:09.066423 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-09-17 01:02:09.066434 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', ...})
2025-09-17 01:02:09.066450 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-09-17 01:02:09.066492 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-09-17 01:02:09.066505 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-09-17 01:02:09.066521 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-09-17 01:02:09.066532 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-09-17 01:02:09.066541 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-09-17 01:02:09.066552 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-09-17 01:02:09.066569 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-09-17 01:02:09.066606 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-09-17 01:02:09.066618 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-09-17 01:02:09.066634 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-09-17 01:02:09.066644 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-09-17 01:02:09.066654 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-09-17 01:02:09.066664 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-09-17 01:02:09.066678 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-09-17 01:02:09.066714 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-09-17 01:02:09.066726 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-09-17 01:02:09.066742 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-09-17 01:02:09.066752 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-09-17 01:02:09.066762 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-09-17 01:02:09.066772 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-09-17 01:02:09.066786 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', ...})
2025-09-17 01:02:09.066821 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-09-17 01:02:09.066839 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-09-17 01:02:09.066849 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', ...})
2025-09-17 01:02:09.066859 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-09-17 01:02:09.066869 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', ...})
2025-09-17 01:02:09.066879 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2025-09-17 01:02:09.066894 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', ...})
2025-09-17 01:02:09.067027 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-09-17 01:02:09.067049 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', ...})
2025-09-17 01:02:09.067059 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2025-09-17 01:02:09.067069 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', ...})
2025-09-17 01:02:09.067079 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2025-09-17 01:02:09.067090 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062908, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0,
'ctime': 1758068076.092983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067099 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062908, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.092983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067143 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1062888, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0886655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067161 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062868, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0829835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067171 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062923, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067181 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062868, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0829835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067192 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062868, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0829835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067202 | orchestrator | skipping: [testbed-node-0] 
=> (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062868, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0829835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067212 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062923, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067260 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062908, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.092983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067272 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1062875, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0846653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.067282 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062923, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067292 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062907, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0925908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067302 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062923, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 
'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067312 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062868, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0829835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067322 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062907, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0925908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067347 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062923, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067358 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062923, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067368 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062907, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0925908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067378 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062907, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0925908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067388 | orchestrator | skipping: [testbed-node-0] 
=> (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062907, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0925908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067398 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062907, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0925908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067408 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062876, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0856144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067439 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062876, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0856144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067450 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062876, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0856144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067460 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062876, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0856144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067470 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1062893, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 
1758068076.0896158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.067480 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062876, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0856144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067490 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062876, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0856144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067500 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062873, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0836654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067528 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062873, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0836654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067539 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062873, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0836654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067549 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062873, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0836654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067559 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062898, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0901835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067569 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062873, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0836654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067579 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062898, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0901835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067595 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062873, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0836654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067616 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062898, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0901835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067626 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062896, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.090039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067636 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062898, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 
1758068076.0901835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067646 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062896, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.090039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067656 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062898, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0901835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067667 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062896, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.090039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067682 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062898, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0901835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067701 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062921, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067711 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:09.067721 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1062900, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.090751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.067731 | orchestrator 
| skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062896, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.090039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067741 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062921, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067751 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:02:09.067761 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062896, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.090039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067771 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062896, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.090039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067786 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062921, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067796 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:02:09.067815 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062921, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067826 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:02:09.067836 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1062894, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0898478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.067846 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062921, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067856 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:09.067866 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062921, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 01:02:09.067876 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:09.067886 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1062888, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0886655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.067901 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062908, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.092983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.067911 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062868, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0829835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.067979 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062923, 'dev': 159, 'nlink': 1, 'atime': 
1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.067993 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062907, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0925908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.068003 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062876, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0856144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.068014 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062873, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0836654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.068024 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062898, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0901835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.068044 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062896, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.090039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.068054 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062921, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 01:02:09.068064 | orchestrator | 2025-09-17 
01:02:09.068074 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-17 01:02:09.068088 | orchestrator | Wednesday 17 September 2025 01:00:05 +0000 (0:00:24.514) 0:00:47.695 ***
2025-09-17 01:02:09.068099 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 01:02:09.068109 | orchestrator |
2025-09-17 01:02:09.068123 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-17 01:02:09.068133 | orchestrator | Wednesday 17 September 2025 01:00:06 +0000 (0:00:00.695) 0:00:48.391 ***
2025-09-17 01:02:09.068143 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-09-17 01:02:09.068192 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 01:02:09.068201 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-09-17 01:02:09.068250 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-09-17 01:02:09.068298 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-09-17 01:02:09.068345 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-09-17 01:02:09.068384 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-09-17 01:02:09.068424 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-09-17 01:02:09.068463 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-17 01:02:09.068471 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-17 01:02:09.068479 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-17 01:02:09.068487 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-17 01:02:09.068494 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-17 01:02:09.068502 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-17 01:02:09.068510 | orchestrator |
2025-09-17 01:02:09.068518 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-17 01:02:09.068526 | orchestrator | Wednesday 17 September 2025 01:00:08 +0000 (0:00:12.851) 0:00:50.186 ***
2025-09-17 01:02:09.068534 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-17 01:02:09.068542 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:02:09.068550 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-17 01:02:09.068558 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:02:09.068566 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-17 01:02:09.068574 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:02:09.068582 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-17 01:02:09.068590 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:02:09.068597 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-17 01:02:09.068605 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:02:09.068613 | orchestrator |
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-17 01:02:09.068621 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:02:09.068629 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-17 01:02:09.068637 | orchestrator |
2025-09-17 01:02:09.068645 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-09-17 01:02:09.068656 | orchestrator | Wednesday 17 September 2025 01:00:21 +0000 (0:00:12.851) 0:01:03.037 ***
2025-09-17 01:02:09.068665 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-17 01:02:09.068676 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-17 01:02:09.068695 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:02:09.068703 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:02:09.068711 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-17 01:02:09.068719 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:02:09.068727 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-17 01:02:09.068734 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:02:09.068742 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-17 01:02:09.068750 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:02:09.068758 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-17 01:02:09.068766 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:02:09.068774 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-17 01:02:09.068782 | orchestrator |
2025-09-17 01:02:09.068789 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-09-17 01:02:09.068797 | orchestrator | Wednesday 17 September 2025 01:00:23 +0000 (0:00:02.537) 0:01:05.575 ***
2025-09-17 01:02:09.068805 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-17 01:02:09.068814 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-17 01:02:09.068822 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-17 01:02:09.068829 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:02:09.068837 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:02:09.068845 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:02:09.068853 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-17 01:02:09.068861 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:02:09.068869 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-17 01:02:09.068877 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:02:09.068885 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-17 01:02:09.068893 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:02:09.068901 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-17 01:02:09.068909 | orchestrator |
2025-09-17 01:02:09.068916 |
orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-09-17 01:02:09.068924 | orchestrator | Wednesday 17 September 2025 01:00:25 +0000 (0:00:01.396) 0:01:06.972 ***
2025-09-17 01:02:09.068947 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 01:02:09.068955 | orchestrator |
2025-09-17 01:02:09.068962 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-09-17 01:02:09.068970 | orchestrator | Wednesday 17 September 2025 01:00:25 +0000 (0:00:00.710) 0:01:07.683 ***
2025-09-17 01:02:09.068978 | orchestrator | skipping: [testbed-manager]
2025-09-17 01:02:09.068986 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:02:09.068994 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:02:09.069001 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:02:09.069009 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:02:09.069017 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:02:09.069025 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:02:09.069033 | orchestrator |
2025-09-17 01:02:09.069040 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-09-17 01:02:09.069053 | orchestrator | Wednesday 17 September 2025 01:00:26 +0000 (0:00:00.649) 0:01:08.332 ***
2025-09-17 01:02:09.069061 | orchestrator | skipping: [testbed-manager]
2025-09-17 01:02:09.069069 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:02:09.069077 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:02:09.069084 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:02:09.069092 | orchestrator | changed: [testbed-node-1]
2025-09-17 01:02:09.069100 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:02:09.069108 | orchestrator | changed: [testbed-node-2]
2025-09-17 01:02:09.069116 | orchestrator |
2025-09-17 01:02:09.069123 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-09-17 01:02:09.069131 | orchestrator | Wednesday 17 September 2025 01:00:28 +0000 (0:00:02.156) 0:01:10.488 ***
2025-09-17 01:02:09.069139 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-17 01:02:09.069147 | orchestrator | skipping: [testbed-manager]
2025-09-17 01:02:09.069155 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-17 01:02:09.069163 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:02:09.069171 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-17 01:02:09.069178 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:02:09.069190 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-17 01:02:09.069198 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:02:09.069206 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-17 01:02:09.069217 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-17 01:02:09.069225 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:02:09.069233 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:02:09.069241 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-17 01:02:09.069249 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:02:09.069257 | orchestrator |
2025-09-17 01:02:09.069265 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-09-17 01:02:09.069272 | orchestrator | Wednesday 17 September 2025 01:00:30 +0000 (0:00:01.875) 0:01:12.364 ***
2025-09-17 01:02:09.069280 | orchestrator | skipping: [testbed-node-0] =>
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-17 01:02:09.069288 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:02:09.069296 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-17 01:02:09.069304 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:02:09.069312 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-17 01:02:09.069320 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:02:09.069328 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-17 01:02:09.069336 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:02:09.069343 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-17 01:02:09.069351 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:02:09.069359 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-17 01:02:09.069367 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:02:09.069375 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-17 01:02:09.069383 | orchestrator |
2025-09-17 01:02:09.069390 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-09-17 01:02:09.069399 | orchestrator | Wednesday 17 September 2025 01:00:32 +0000 (0:00:01.453) 0:01:13.818 ***
2025-09-17 01:02:09.069413 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-09-17 01:02:09.069452 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 01:02:09.069460 | orchestrator |
2025-09-17 01:02:09.069468 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-09-17 01:02:09.069476 | orchestrator | Wednesday 17 September 2025 01:00:33 +0000 (0:00:01.000) 0:01:14.818 ***
2025-09-17 01:02:09.069484 | orchestrator | skipping: [testbed-manager]
2025-09-17 01:02:09.069491 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:02:09.069499 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:02:09.069507 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:02:09.069515 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:02:09.069523 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:02:09.069530 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:02:09.069538 | orchestrator |
2025-09-17 01:02:09.069546 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-09-17 01:02:09.069554 | orchestrator | Wednesday 17 September 2025 01:00:33 +0000 (0:00:00.779) 0:01:15.598 ***
2025-09-17 01:02:09.069562 | orchestrator | skipping: [testbed-manager]
2025-09-17 01:02:09.069570 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:02:09.069577 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:02:09.069585 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:02:09.069593 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:02:09.069600 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:02:09.069608 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:02:09.069616 | orchestrator |
2025-09-17 01:02:09.069624 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-09-17 01:02:09.069632 |
orchestrator | Wednesday 17 September 2025 01:00:34 +0000 (0:00:00.577) 0:01:16.175 *** 2025-09-17 01:02:09.069641 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 01:02:09.069659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.069668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 
01:02:09.069681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.069690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.069698 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.069706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.069714 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 01:02:09.069723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.069740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.069749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.069763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.069771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.069779 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.069788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.069796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.069804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.069820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.069834 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 01:02:09.069843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.069851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.069859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.069868 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.069876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.069891 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.069905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 01:02:09.069913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.069921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.069944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 01:02:09.069952 | orchestrator | 2025-09-17 01:02:09.069960 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-17 01:02:09.069968 | orchestrator | Wednesday 17 September 2025 01:00:38 +0000 (0:00:03.587) 0:01:19.763 *** 2025-09-17 01:02:09.069976 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-17 01:02:09.069984 | orchestrator | skipping: [testbed-manager] 2025-09-17 01:02:09.069991 | orchestrator | 2025-09-17 01:02:09.069999 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 01:02:09.070007 | orchestrator | Wednesday 17 September 2025 01:00:38 +0000 (0:00:00.971) 0:01:20.735 *** 2025-09-17 01:02:09.070035 | orchestrator | 2025-09-17 01:02:09.070043 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 01:02:09.070051 | orchestrator | Wednesday 17 September 2025 01:00:39 +0000 (0:00:00.080) 0:01:20.815 *** 2025-09-17 01:02:09.070060 | orchestrator | 2025-09-17 01:02:09.070068 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2025-09-17 01:02:09.070076 | orchestrator | Wednesday 17 September 2025 01:00:39 +0000 (0:00:00.070) 0:01:20.886 *** 2025-09-17 01:02:09.070083 | orchestrator | 2025-09-17 01:02:09.070091 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 01:02:09.070099 | orchestrator | Wednesday 17 September 2025 01:00:39 +0000 (0:00:00.064) 0:01:20.950 *** 2025-09-17 01:02:09.070106 | orchestrator | 2025-09-17 01:02:09.070114 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 01:02:09.070122 | orchestrator | Wednesday 17 September 2025 01:00:39 +0000 (0:00:00.209) 0:01:21.160 *** 2025-09-17 01:02:09.070130 | orchestrator | 2025-09-17 01:02:09.070137 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 01:02:09.070151 | orchestrator | Wednesday 17 September 2025 01:00:39 +0000 (0:00:00.062) 0:01:21.222 *** 2025-09-17 01:02:09.070159 | orchestrator | 2025-09-17 01:02:09.070167 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 01:02:09.070174 | orchestrator | Wednesday 17 September 2025 01:00:39 +0000 (0:00:00.061) 0:01:21.284 *** 2025-09-17 01:02:09.070182 | orchestrator | 2025-09-17 01:02:09.070190 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-17 01:02:09.070197 | orchestrator | Wednesday 17 September 2025 01:00:39 +0000 (0:00:00.085) 0:01:21.369 *** 2025-09-17 01:02:09.070205 | orchestrator | changed: [testbed-manager] 2025-09-17 01:02:09.070213 | orchestrator | 2025-09-17 01:02:09.070224 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-17 01:02:09.070232 | orchestrator | Wednesday 17 September 2025 01:00:54 +0000 (0:00:15.075) 0:01:36.445 *** 2025-09-17 01:02:09.070244 | orchestrator | 
changed: [testbed-manager] 2025-09-17 01:02:09.070252 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:02:09.070260 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:02:09.070268 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:09.070275 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:02:09.070283 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:02:09.070291 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:02:09.070298 | orchestrator | 2025-09-17 01:02:09.070306 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-17 01:02:09.070314 | orchestrator | Wednesday 17 September 2025 01:01:08 +0000 (0:00:14.224) 0:01:50.670 *** 2025-09-17 01:02:09.070322 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:02:09.070329 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:02:09.070337 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:09.070344 | orchestrator | 2025-09-17 01:02:09.070352 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-17 01:02:09.070360 | orchestrator | Wednesday 17 September 2025 01:01:15 +0000 (0:00:06.479) 0:01:57.150 *** 2025-09-17 01:02:09.070368 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:02:09.070375 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:09.070383 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:02:09.070391 | orchestrator | 2025-09-17 01:02:09.070398 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-17 01:02:09.070406 | orchestrator | Wednesday 17 September 2025 01:01:26 +0000 (0:00:10.919) 0:02:08.069 *** 2025-09-17 01:02:09.070414 | orchestrator | changed: [testbed-manager] 2025-09-17 01:02:09.070421 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:02:09.070429 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:02:09.070437 | orchestrator | changed: 
[testbed-node-2] 2025-09-17 01:02:09.070444 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:02:09.070452 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:09.070459 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:02:09.070467 | orchestrator | 2025-09-17 01:02:09.070475 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-17 01:02:09.070483 | orchestrator | Wednesday 17 September 2025 01:01:41 +0000 (0:00:15.437) 0:02:23.507 *** 2025-09-17 01:02:09.070490 | orchestrator | changed: [testbed-manager] 2025-09-17 01:02:09.070498 | orchestrator | 2025-09-17 01:02:09.070506 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-17 01:02:09.070514 | orchestrator | Wednesday 17 September 2025 01:01:48 +0000 (0:00:07.059) 0:02:30.566 *** 2025-09-17 01:02:09.070521 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:02:09.070529 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:02:09.070537 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:09.070544 | orchestrator | 2025-09-17 01:02:09.070552 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-17 01:02:09.070560 | orchestrator | Wednesday 17 September 2025 01:01:54 +0000 (0:00:06.035) 0:02:36.602 *** 2025-09-17 01:02:09.070573 | orchestrator | changed: [testbed-manager] 2025-09-17 01:02:09.070581 | orchestrator | 2025-09-17 01:02:09.070589 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-17 01:02:09.070597 | orchestrator | Wednesday 17 September 2025 01:02:00 +0000 (0:00:05.837) 0:02:42.439 *** 2025-09-17 01:02:09.070604 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:02:09.070612 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:02:09.070620 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:02:09.070627 | orchestrator | 2025-09-17 
01:02:09.070635 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 01:02:09.070643 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-17 01:02:09.070651 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 01:02:09.070659 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 01:02:09.070667 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 01:02:09.070675 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-17 01:02:09.070683 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-17 01:02:09.070691 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-17 01:02:09.070699 | orchestrator | 2025-09-17 01:02:09.070707 | orchestrator | 2025-09-17 01:02:09.070714 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 01:02:09.070722 | orchestrator | Wednesday 17 September 2025 01:02:06 +0000 (0:00:06.160) 0:02:48.599 *** 2025-09-17 01:02:09.070730 | orchestrator | =============================================================================== 2025-09-17 01:02:09.070738 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.51s 2025-09-17 01:02:09.070745 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.44s 2025-09-17 01:02:09.070753 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.08s 2025-09-17 01:02:09.070767 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 
14.22s 2025-09-17 01:02:09.070775 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 12.85s 2025-09-17 01:02:09.070786 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.92s 2025-09-17 01:02:09.070794 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.06s 2025-09-17 01:02:09.070802 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.48s 2025-09-17 01:02:09.070810 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.33s 2025-09-17 01:02:09.070818 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.16s 2025-09-17 01:02:09.070826 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.04s 2025-09-17 01:02:09.070833 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.84s 2025-09-17 01:02:09.070841 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.69s 2025-09-17 01:02:09.070849 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.59s 2025-09-17 01:02:09.070856 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.92s 2025-09-17 01:02:09.070864 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.54s 2025-09-17 01:02:09.070876 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.16s 2025-09-17 01:02:09.070884 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.88s 2025-09-17 01:02:09.070892 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.81s 2025-09-17 01:02:09.070899 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.79s 
2025-09-17 01:02:09.070907 | orchestrator | 2025-09-17 01:02:09 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:09.070915 | orchestrator | 2025-09-17 01:02:09 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:09.070923 | orchestrator | 2025-09-17 01:02:09 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:12.093801 | orchestrator | 2025-09-17 01:02:12 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:12.094528 | orchestrator | 2025-09-17 01:02:12 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:12.095169 | orchestrator | 2025-09-17 01:02:12 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:12.095922 | orchestrator | 2025-09-17 01:02:12 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:12.096254 | orchestrator | 2025-09-17 01:02:12 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:15.135562 | orchestrator | 2025-09-17 01:02:15 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:15.137873 | orchestrator | 2025-09-17 01:02:15 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:15.138992 | orchestrator | 2025-09-17 01:02:15 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:15.141092 | orchestrator | 2025-09-17 01:02:15 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:15.141327 | orchestrator | 2025-09-17 01:02:15 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:18.175059 | orchestrator | 2025-09-17 01:02:18 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:18.175793 | orchestrator | 2025-09-17 01:02:18 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:18.176831 | 
orchestrator | 2025-09-17 01:02:18 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:18.178147 | orchestrator | 2025-09-17 01:02:18 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:18.178171 | orchestrator | 2025-09-17 01:02:18 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:21.217536 | orchestrator | 2025-09-17 01:02:21 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:21.218522 | orchestrator | 2025-09-17 01:02:21 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:21.219980 | orchestrator | 2025-09-17 01:02:21 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:21.221344 | orchestrator | 2025-09-17 01:02:21 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:21.221365 | orchestrator | 2025-09-17 01:02:21 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:24.262396 | orchestrator | 2025-09-17 01:02:24 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:24.262667 | orchestrator | 2025-09-17 01:02:24 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:24.263439 | orchestrator | 2025-09-17 01:02:24 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:24.263904 | orchestrator | 2025-09-17 01:02:24 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:24.264008 | orchestrator | 2025-09-17 01:02:24 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:27.292331 | orchestrator | 2025-09-17 01:02:27 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:27.292860 | orchestrator | 2025-09-17 01:02:27 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:27.293769 | orchestrator | 2025-09-17 
01:02:27 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:27.294553 | orchestrator | 2025-09-17 01:02:27 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:27.294579 | orchestrator | 2025-09-17 01:02:27 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:30.325045 | orchestrator | 2025-09-17 01:02:30 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:30.325491 | orchestrator | 2025-09-17 01:02:30 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:30.326996 | orchestrator | 2025-09-17 01:02:30 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:30.328587 | orchestrator | 2025-09-17 01:02:30 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:30.328610 | orchestrator | 2025-09-17 01:02:30 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:33.376857 | orchestrator | 2025-09-17 01:02:33 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:33.378167 | orchestrator | 2025-09-17 01:02:33 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:33.379716 | orchestrator | 2025-09-17 01:02:33 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:33.380886 | orchestrator | 2025-09-17 01:02:33 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:33.380914 | orchestrator | 2025-09-17 01:02:33 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:36.416656 | orchestrator | 2025-09-17 01:02:36 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:36.416770 | orchestrator | 2025-09-17 01:02:36 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:36.419735 | orchestrator | 2025-09-17 01:02:36 | INFO  | Task 
4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:36.420337 | orchestrator | 2025-09-17 01:02:36 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:36.420359 | orchestrator | 2025-09-17 01:02:36 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:39.446583 | orchestrator | 2025-09-17 01:02:39 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:39.448582 | orchestrator | 2025-09-17 01:02:39 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:39.450451 | orchestrator | 2025-09-17 01:02:39 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:39.452234 | orchestrator | 2025-09-17 01:02:39 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:39.452286 | orchestrator | 2025-09-17 01:02:39 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:42.496725 | orchestrator | 2025-09-17 01:02:42 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:42.498474 | orchestrator | 2025-09-17 01:02:42 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:42.499867 | orchestrator | 2025-09-17 01:02:42 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:42.501630 | orchestrator | 2025-09-17 01:02:42 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:42.501870 | orchestrator | 2025-09-17 01:02:42 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:45.541041 | orchestrator | 2025-09-17 01:02:45 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:45.541877 | orchestrator | 2025-09-17 01:02:45 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:45.542890 | orchestrator | 2025-09-17 01:02:45 | INFO  | Task 
4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:45.544182 | orchestrator | 2025-09-17 01:02:45 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:45.544282 | orchestrator | 2025-09-17 01:02:45 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:48.588501 | orchestrator | 2025-09-17 01:02:48 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:48.589495 | orchestrator | 2025-09-17 01:02:48 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:48.591227 | orchestrator | 2025-09-17 01:02:48 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:48.592611 | orchestrator | 2025-09-17 01:02:48 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:48.592711 | orchestrator | 2025-09-17 01:02:48 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:51.629219 | orchestrator | 2025-09-17 01:02:51 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:51.630867 | orchestrator | 2025-09-17 01:02:51 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:51.632128 | orchestrator | 2025-09-17 01:02:51 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:51.635147 | orchestrator | 2025-09-17 01:02:51 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:51.635194 | orchestrator | 2025-09-17 01:02:51 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:54.674794 | orchestrator | 2025-09-17 01:02:54 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:54.676789 | orchestrator | 2025-09-17 01:02:54 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state STARTED 2025-09-17 01:02:54.679273 | orchestrator | 2025-09-17 01:02:54 | INFO  | Task 
4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:54.680872 | orchestrator | 2025-09-17 01:02:54 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:54.681160 | orchestrator | 2025-09-17 01:02:54 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:02:57.723487 | orchestrator | 2025-09-17 01:02:57 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:02:57.723772 | orchestrator | 2025-09-17 01:02:57 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:02:57.725264 | orchestrator | 2025-09-17 01:02:57.725285 | orchestrator | 2025-09-17 01:02:57 | INFO  | Task 8115ab2f-f9d2-4cbd-b361-0ed2b11c3b68 is in state SUCCESS 2025-09-17 01:02:57.726560 | orchestrator | 2025-09-17 01:02:57.726589 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 01:02:57.726622 | orchestrator | 2025-09-17 01:02:57.726632 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 01:02:57.726641 | orchestrator | Wednesday 17 September 2025 01:00:14 +0000 (0:00:00.234) 0:00:00.234 *** 2025-09-17 01:02:57.726650 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:02:57.726661 | orchestrator | ok: [testbed-node-1] 2025-09-17 01:02:57.726669 | orchestrator | ok: [testbed-node-2] 2025-09-17 01:02:57.726678 | orchestrator | 2025-09-17 01:02:57.726687 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 01:02:57.726695 | orchestrator | Wednesday 17 September 2025 01:00:14 +0000 (0:00:00.257) 0:00:00.491 *** 2025-09-17 01:02:57.726704 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-17 01:02:57.726713 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-17 01:02:57.726721 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-17 
01:02:57.726730 | orchestrator | 2025-09-17 01:02:57.726739 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-17 01:02:57.726747 | orchestrator | 2025-09-17 01:02:57.726756 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-17 01:02:57.726764 | orchestrator | Wednesday 17 September 2025 01:00:14 +0000 (0:00:00.336) 0:00:00.828 *** 2025-09-17 01:02:57.726773 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 01:02:57.726783 | orchestrator | 2025-09-17 01:02:57.726791 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-17 01:02:57.726800 | orchestrator | Wednesday 17 September 2025 01:00:15 +0000 (0:00:00.485) 0:00:01.314 *** 2025-09-17 01:02:57.726808 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-17 01:02:57.726817 | orchestrator | 2025-09-17 01:02:57.726825 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-17 01:02:57.726834 | orchestrator | Wednesday 17 September 2025 01:00:19 +0000 (0:00:03.853) 0:00:05.167 *** 2025-09-17 01:02:57.726843 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-17 01:02:57.726865 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-17 01:02:57.726874 | orchestrator | 2025-09-17 01:02:57.726883 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-17 01:02:57.726892 | orchestrator | Wednesday 17 September 2025 01:00:25 +0000 (0:00:06.791) 0:00:11.959 *** 2025-09-17 01:02:57.726900 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-17 01:02:57.726909 | orchestrator | 2025-09-17 01:02:57.726918 | orchestrator | TASK 
[service-ks-register : glance | Creating users] *************************** 2025-09-17 01:02:57.726963 | orchestrator | Wednesday 17 September 2025 01:00:29 +0000 (0:00:03.650) 0:00:15.609 *** 2025-09-17 01:02:57.726973 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-17 01:02:57.726982 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-17 01:02:57.726991 | orchestrator | 2025-09-17 01:02:57.726999 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-17 01:02:57.727008 | orchestrator | Wednesday 17 September 2025 01:00:34 +0000 (0:00:04.536) 0:00:20.145 *** 2025-09-17 01:02:57.727016 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-17 01:02:57.727025 | orchestrator | 2025-09-17 01:02:57.727034 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-17 01:02:57.727042 | orchestrator | Wednesday 17 September 2025 01:00:37 +0000 (0:00:03.256) 0:00:23.402 *** 2025-09-17 01:02:57.727051 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-17 01:02:57.727060 | orchestrator | 2025-09-17 01:02:57.727068 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-17 01:02:57.727077 | orchestrator | Wednesday 17 September 2025 01:00:40 +0000 (0:00:03.606) 0:00:27.008 *** 2025-09-17 01:02:57.727109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.727128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.727139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.727154 | orchestrator | 2025-09-17 01:02:57.727165 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-17 01:02:57.727266 | orchestrator | Wednesday 17 September 2025 01:00:46 +0000 (0:00:05.293) 0:00:32.302 *** 2025-09-17 01:02:57.727279 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 01:02:57.727289 | orchestrator | 2025-09-17 01:02:57.727306 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-17 01:02:57.727316 | orchestrator | Wednesday 17 September 2025 01:00:46 +0000 (0:00:00.624) 0:00:32.926 *** 2025-09-17 01:02:57.727327 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:02:57.727336 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:57.727347 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:02:57.727356 | orchestrator | 2025-09-17 01:02:57.727367 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-17 01:02:57.727378 | orchestrator | Wednesday 17 September 2025 01:00:51 +0000 (0:00:04.139) 0:00:37.066 *** 2025-09-17 01:02:57.727388 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 01:02:57.727398 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 01:02:57.727409 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 01:02:57.727419 | orchestrator | 2025-09-17 01:02:57.727428 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-17 01:02:57.727438 | orchestrator | Wednesday 17 September 2025 01:00:53 +0000 (0:00:02.120) 0:00:39.187 *** 2025-09-17 01:02:57.727448 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 01:02:57.727459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 01:02:57.727469 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 01:02:57.727479 | orchestrator | 2025-09-17 01:02:57.727489 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-17 01:02:57.727499 | orchestrator | Wednesday 17 September 2025 01:00:54 +0000 (0:00:01.316) 0:00:40.504 *** 2025-09-17 01:02:57.727509 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:02:57.727520 | orchestrator | ok: [testbed-node-1] 2025-09-17 01:02:57.727528 | orchestrator | ok: [testbed-node-2] 2025-09-17 01:02:57.727537 | orchestrator | 2025-09-17 01:02:57.727546 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-17 01:02:57.727560 | orchestrator | Wednesday 17 September 2025 01:00:55 +0000 (0:00:01.086) 0:00:41.591 *** 2025-09-17 01:02:57.727569 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.727577 | orchestrator | 
2025-09-17 01:02:57.727593 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-17 01:02:57.727602 | orchestrator | Wednesday 17 September 2025 01:00:56 +0000 (0:00:00.706) 0:00:42.297 *** 2025-09-17 01:02:57.727610 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.727619 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:57.727627 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:57.727636 | orchestrator | 2025-09-17 01:02:57.727644 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-17 01:02:57.727653 | orchestrator | Wednesday 17 September 2025 01:00:57 +0000 (0:00:00.770) 0:00:43.068 *** 2025-09-17 01:02:57.727662 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 01:02:57.727670 | orchestrator | 2025-09-17 01:02:57.727679 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-17 01:02:57.727688 | orchestrator | Wednesday 17 September 2025 01:00:58 +0000 (0:00:01.265) 0:00:44.334 *** 2025-09-17 01:02:57.727702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.727714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.727729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.727739 | orchestrator | 2025-09-17 01:02:57.727748 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-17 01:02:57.727757 | orchestrator | Wednesday 17 September 2025 01:01:03 +0000 (0:00:05.172) 0:00:49.506 *** 2025-09-17 01:02:57.727800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 01:02:57.727817 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.727831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 01:02:57.727841 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:57.727857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 01:02:57.727867 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:57.727876 | orchestrator | 2025-09-17 01:02:57.727885 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-17 01:02:57.727893 | orchestrator | Wednesday 17 September 2025 01:01:06 +0000 (0:00:02.687) 0:00:52.194 *** 2025-09-17 01:02:57.727907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 01:02:57.727921 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:57.727971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 
01:02:57.727981 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.727995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 01:02:57.728017 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:57.728026 | orchestrator | 2025-09-17 01:02:57.728034 | orchestrator | TASK [glance : Creating TLS backend 
PEM File] ********************************** 2025-09-17 01:02:57.728043 | orchestrator | Wednesday 17 September 2025 01:01:08 +0000 (0:00:02.726) 0:00:54.921 *** 2025-09-17 01:02:57.728052 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:57.728060 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:57.728069 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.728077 | orchestrator | 2025-09-17 01:02:57.728086 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-17 01:02:57.728095 | orchestrator | Wednesday 17 September 2025 01:01:13 +0000 (0:00:04.198) 0:00:59.119 *** 2025-09-17 01:02:57.728108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.728123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.728142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.728152 | orchestrator | 2025-09-17 01:02:57.728160 | orchestrator | TASK [glance : 
Copying over glance-api.conf] *********************************** 2025-09-17 01:02:57.728169 | orchestrator | Wednesday 17 September 2025 01:01:16 +0000 (0:00:03.641) 0:01:02.761 *** 2025-09-17 01:02:57.728177 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:57.728186 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:02:57.728194 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:02:57.728203 | orchestrator | 2025-09-17 01:02:57.728211 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-17 01:02:57.728220 | orchestrator | Wednesday 17 September 2025 01:01:21 +0000 (0:00:04.821) 0:01:07.583 *** 2025-09-17 01:02:57.728228 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.728237 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:57.728326 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:57.728336 | orchestrator | 2025-09-17 01:02:57.728345 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-17 01:02:57.728359 | orchestrator | Wednesday 17 September 2025 01:01:24 +0000 (0:00:02.700) 0:01:10.283 *** 2025-09-17 01:02:57.728376 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.728384 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:57.728393 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:57.728402 | orchestrator | 2025-09-17 01:02:57.728411 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-17 01:02:57.728419 | orchestrator | Wednesday 17 September 2025 01:01:28 +0000 (0:00:04.350) 0:01:14.633 *** 2025-09-17 01:02:57.728428 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:57.728437 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:57.728445 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.728454 | orchestrator | 2025-09-17 01:02:57.728462 | orchestrator | TASK [glance : 
Copying over property-protections-rules.conf] ******************* 2025-09-17 01:02:57.728471 | orchestrator | Wednesday 17 September 2025 01:01:33 +0000 (0:00:04.910) 0:01:19.544 *** 2025-09-17 01:02:57.728480 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:57.728488 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:57.728497 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.728505 | orchestrator | 2025-09-17 01:02:57.728514 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-17 01:02:57.728523 | orchestrator | Wednesday 17 September 2025 01:01:37 +0000 (0:00:03.831) 0:01:23.375 *** 2025-09-17 01:02:57.728531 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.728540 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:57.728548 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:57.728557 | orchestrator | 2025-09-17 01:02:57.728566 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-17 01:02:57.728574 | orchestrator | Wednesday 17 September 2025 01:01:37 +0000 (0:00:00.279) 0:01:23.654 *** 2025-09-17 01:02:57.728583 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-17 01:02:57.728592 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:57.728600 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-17 01:02:57.728609 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.728618 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-17 01:02:57.728626 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:57.728635 | orchestrator | 2025-09-17 01:02:57.728649 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-17 
01:02:57.728657 | orchestrator | Wednesday 17 September 2025 01:01:40 +0000 (0:00:03.326) 0:01:26.980 *** 2025-09-17 01:02:57.728667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.728690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.728705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 01:02:57.728715 | orchestrator | 2025-09-17 01:02:57.728724 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-17 01:02:57.728733 | orchestrator | Wednesday 17 September 2025 01:01:44 +0000 (0:00:03.933) 0:01:30.914 *** 2025-09-17 01:02:57.728747 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:02:57.728755 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:02:57.728764 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:02:57.728773 | orchestrator | 2025-09-17 01:02:57.728781 | orchestrator | TASK [glance : Creating Glance database] *************************************** 
2025-09-17 01:02:57.728790 | orchestrator | Wednesday 17 September 2025 01:01:45 +0000 (0:00:00.259) 0:01:31.173 *** 2025-09-17 01:02:57.728798 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:57.728807 | orchestrator | 2025-09-17 01:02:57.728816 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-17 01:02:57.728824 | orchestrator | Wednesday 17 September 2025 01:01:47 +0000 (0:00:02.350) 0:01:33.524 *** 2025-09-17 01:02:57.728833 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:57.728841 | orchestrator | 2025-09-17 01:02:57.728850 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-17 01:02:57.728858 | orchestrator | Wednesday 17 September 2025 01:01:50 +0000 (0:00:02.689) 0:01:36.214 *** 2025-09-17 01:02:57.728867 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:57.728876 | orchestrator | 2025-09-17 01:02:57.728884 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-17 01:02:57.728893 | orchestrator | Wednesday 17 September 2025 01:01:52 +0000 (0:00:02.624) 0:01:38.838 *** 2025-09-17 01:02:57.728901 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:57.728910 | orchestrator | 2025-09-17 01:02:57.728918 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-17 01:02:57.728944 | orchestrator | Wednesday 17 September 2025 01:02:20 +0000 (0:00:27.742) 0:02:06.581 *** 2025-09-17 01:02:57.728954 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:57.728964 | orchestrator | 2025-09-17 01:02:57.728979 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-17 01:02:57.728990 | orchestrator | Wednesday 17 September 2025 01:02:22 +0000 (0:00:02.197) 0:02:08.778 *** 2025-09-17 01:02:57.729000 | orchestrator | 2025-09-17 01:02:57.729010 | orchestrator | TASK [glance : 
Flush handlers] ************************************************* 2025-09-17 01:02:57.729021 | orchestrator | Wednesday 17 September 2025 01:02:22 +0000 (0:00:00.070) 0:02:08.849 *** 2025-09-17 01:02:57.729031 | orchestrator | 2025-09-17 01:02:57.729041 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-17 01:02:57.729051 | orchestrator | Wednesday 17 September 2025 01:02:22 +0000 (0:00:00.063) 0:02:08.912 *** 2025-09-17 01:02:57.729061 | orchestrator | 2025-09-17 01:02:57.729072 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-17 01:02:57.729081 | orchestrator | Wednesday 17 September 2025 01:02:22 +0000 (0:00:00.076) 0:02:08.988 *** 2025-09-17 01:02:57.729091 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:02:57.729101 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:02:57.729111 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:02:57.729121 | orchestrator | 2025-09-17 01:02:57.729130 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 01:02:57.729142 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-17 01:02:57.729154 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 01:02:57.729164 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 01:02:57.729175 | orchestrator | 2025-09-17 01:02:57.729185 | orchestrator | 2025-09-17 01:02:57.729195 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 01:02:57.729205 | orchestrator | Wednesday 17 September 2025 01:02:55 +0000 (0:00:32.382) 0:02:41.370 *** 2025-09-17 01:02:57.729215 | orchestrator | =============================================================================== 
2025-09-17 01:02:57.729231 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.38s 2025-09-17 01:02:57.729246 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.74s 2025-09-17 01:02:57.729256 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.79s 2025-09-17 01:02:57.729266 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.29s 2025-09-17 01:02:57.729276 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.17s 2025-09-17 01:02:57.729285 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.91s 2025-09-17 01:02:57.729295 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 4.82s 2025-09-17 01:02:57.729305 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.54s 2025-09-17 01:02:57.729315 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.35s 2025-09-17 01:02:57.729324 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.20s 2025-09-17 01:02:57.729332 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.14s 2025-09-17 01:02:57.729341 | orchestrator | glance : Check glance containers ---------------------------------------- 3.93s 2025-09-17 01:02:57.729349 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.85s 2025-09-17 01:02:57.729358 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.83s 2025-09-17 01:02:57.729366 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.65s 2025-09-17 01:02:57.729375 | orchestrator | glance : Copying over config.json files for services -------------------- 3.64s 2025-09-17 
01:02:57.729383 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.61s 2025-09-17 01:02:57.729392 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.33s 2025-09-17 01:02:57.729400 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.26s 2025-09-17 01:02:57.729409 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 2.73s 2025-09-17 01:02:57.729418 | orchestrator | 2025-09-17 01:02:57 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:02:57.729426 | orchestrator | 2025-09-17 01:02:57 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:02:57.729435 | orchestrator | 2025-09-17 01:02:57 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:00.775038 | orchestrator | 2025-09-17 01:03:00 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:00.776823 | orchestrator | 2025-09-17 01:03:00 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:00.782694 | orchestrator | 2025-09-17 01:03:00 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:00.784977 | orchestrator | 2025-09-17 01:03:00 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:00.784993 | orchestrator | 2025-09-17 01:03:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:03.832514 | orchestrator | 2025-09-17 01:03:03 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:03.833218 | orchestrator | 2025-09-17 01:03:03 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:03.835383 | orchestrator | 2025-09-17 01:03:03 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:03.836368 | orchestrator | 2025-09-17 01:03:03 | 
INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:03.836634 | orchestrator | 2025-09-17 01:03:03 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:06.878070 | orchestrator | 2025-09-17 01:03:06 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:06.896374 | orchestrator | 2025-09-17 01:03:06 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:06.896435 | orchestrator | 2025-09-17 01:03:06 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:06.896447 | orchestrator | 2025-09-17 01:03:06 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:06.896459 | orchestrator | 2025-09-17 01:03:06 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:09.925318 | orchestrator | 2025-09-17 01:03:09 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:09.925419 | orchestrator | 2025-09-17 01:03:09 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:09.926005 | orchestrator | 2025-09-17 01:03:09 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:09.927034 | orchestrator | 2025-09-17 01:03:09 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:09.927072 | orchestrator | 2025-09-17 01:03:09 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:12.964034 | orchestrator | 2025-09-17 01:03:12 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:12.964157 | orchestrator | 2025-09-17 01:03:12 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:12.966569 | orchestrator | 2025-09-17 01:03:12 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:12.967464 | orchestrator | 2025-09-17 01:03:12 | INFO  | Task 
29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:12.967516 | orchestrator | 2025-09-17 01:03:12 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:16.011828 | orchestrator | 2025-09-17 01:03:16 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:16.014258 | orchestrator | 2025-09-17 01:03:16 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:16.017613 | orchestrator | 2025-09-17 01:03:16 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:16.019415 | orchestrator | 2025-09-17 01:03:16 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:16.019716 | orchestrator | 2025-09-17 01:03:16 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:19.060668 | orchestrator | 2025-09-17 01:03:19 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:19.061747 | orchestrator | 2025-09-17 01:03:19 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:19.063034 | orchestrator | 2025-09-17 01:03:19 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:19.065192 | orchestrator | 2025-09-17 01:03:19 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:19.065421 | orchestrator | 2025-09-17 01:03:19 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:22.107585 | orchestrator | 2025-09-17 01:03:22 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:22.108768 | orchestrator | 2025-09-17 01:03:22 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:22.110116 | orchestrator | 2025-09-17 01:03:22 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:22.111201 | orchestrator | 2025-09-17 01:03:22 | INFO  | Task 
29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:22.111246 | orchestrator | 2025-09-17 01:03:22 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:25.155862 | orchestrator | 2025-09-17 01:03:25 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:25.157748 | orchestrator | 2025-09-17 01:03:25 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:25.159482 | orchestrator | 2025-09-17 01:03:25 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:25.160596 | orchestrator | 2025-09-17 01:03:25 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:25.160620 | orchestrator | 2025-09-17 01:03:25 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:28.201364 | orchestrator | 2025-09-17 01:03:28 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:28.202698 | orchestrator | 2025-09-17 01:03:28 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:28.205571 | orchestrator | 2025-09-17 01:03:28 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:28.208089 | orchestrator | 2025-09-17 01:03:28 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:28.208386 | orchestrator | 2025-09-17 01:03:28 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:31.244799 | orchestrator | 2025-09-17 01:03:31 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:31.245140 | orchestrator | 2025-09-17 01:03:31 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:31.246688 | orchestrator | 2025-09-17 01:03:31 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:31.248112 | orchestrator | 2025-09-17 01:03:31 | INFO  | Task 
29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:31.248142 | orchestrator | 2025-09-17 01:03:31 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:34.290617 | orchestrator | 2025-09-17 01:03:34 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:34.290728 | orchestrator | 2025-09-17 01:03:34 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:34.292453 | orchestrator | 2025-09-17 01:03:34 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:34.293168 | orchestrator | 2025-09-17 01:03:34 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:34.293201 | orchestrator | 2025-09-17 01:03:34 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:37.336253 | orchestrator | 2025-09-17 01:03:37 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:37.338012 | orchestrator | 2025-09-17 01:03:37 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:37.339686 | orchestrator | 2025-09-17 01:03:37 | INFO  | Task 4ac42797-57d2-4fc7-ade2-959ec6812426 is in state STARTED 2025-09-17 01:03:37.341383 | orchestrator | 2025-09-17 01:03:37 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:37.341594 | orchestrator | 2025-09-17 01:03:37 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:40.379416 | orchestrator | 2025-09-17 01:03:40 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:40.381153 | orchestrator | 2025-09-17 01:03:40 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:03:40.383135 | orchestrator | 2025-09-17 01:03:40 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:40.385771 | orchestrator | 2025-09-17 01:03:40 | INFO  | Task 
4ac42797-57d2-4fc7-ade2-959ec6812426 is in state SUCCESS 2025-09-17 01:03:40.387599 | orchestrator | 2025-09-17 01:03:40.387634 | orchestrator | 2025-09-17 01:03:40.387646 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 01:03:40.387659 | orchestrator | 2025-09-17 01:03:40.387670 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 01:03:40.387681 | orchestrator | Wednesday 17 September 2025 01:00:52 +0000 (0:00:00.293) 0:00:00.293 *** 2025-09-17 01:03:40.387692 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:03:40.387704 | orchestrator | ok: [testbed-node-1] 2025-09-17 01:03:40.387715 | orchestrator | ok: [testbed-node-2] 2025-09-17 01:03:40.387726 | orchestrator | ok: [testbed-node-3] 2025-09-17 01:03:40.387736 | orchestrator | ok: [testbed-node-4] 2025-09-17 01:03:40.387856 | orchestrator | ok: [testbed-node-5] 2025-09-17 01:03:40.387870 | orchestrator | 2025-09-17 01:03:40.387988 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 01:03:40.388349 | orchestrator | Wednesday 17 September 2025 01:00:53 +0000 (0:00:00.760) 0:00:01.054 *** 2025-09-17 01:03:40.388371 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-17 01:03:40.388383 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-17 01:03:40.388394 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-17 01:03:40.388405 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-17 01:03:40.388415 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-17 01:03:40.388426 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-17 01:03:40.388436 | orchestrator | 2025-09-17 01:03:40.388447 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-17 01:03:40.388458 | 
orchestrator | 2025-09-17 01:03:40.388469 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-17 01:03:40.388479 | orchestrator | Wednesday 17 September 2025 01:00:54 +0000 (0:00:00.642) 0:00:01.696 *** 2025-09-17 01:03:40.388745 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 01:03:40.388761 | orchestrator | 2025-09-17 01:03:40.388773 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-17 01:03:40.388783 | orchestrator | Wednesday 17 September 2025 01:00:56 +0000 (0:00:02.544) 0:00:04.241 *** 2025-09-17 01:03:40.388795 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-17 01:03:40.388806 | orchestrator | 2025-09-17 01:03:40.388818 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-17 01:03:40.388829 | orchestrator | Wednesday 17 September 2025 01:01:00 +0000 (0:00:04.165) 0:00:08.407 *** 2025-09-17 01:03:40.388840 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-17 01:03:40.388851 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-17 01:03:40.388862 | orchestrator | 2025-09-17 01:03:40.388873 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-17 01:03:40.388884 | orchestrator | Wednesday 17 September 2025 01:01:07 +0000 (0:00:07.054) 0:00:15.461 *** 2025-09-17 01:03:40.388895 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-17 01:03:40.388905 | orchestrator | 2025-09-17 01:03:40.388916 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-17 01:03:40.388959 
| orchestrator | Wednesday 17 September 2025 01:01:11 +0000 (0:00:03.600) 0:00:19.062 *** 2025-09-17 01:03:40.388971 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-17 01:03:40.388997 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-17 01:03:40.389024 | orchestrator | 2025-09-17 01:03:40.389035 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-17 01:03:40.389046 | orchestrator | Wednesday 17 September 2025 01:01:15 +0000 (0:00:04.331) 0:00:23.394 *** 2025-09-17 01:03:40.389056 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-17 01:03:40.389067 | orchestrator | 2025-09-17 01:03:40.389078 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-17 01:03:40.389088 | orchestrator | Wednesday 17 September 2025 01:01:19 +0000 (0:00:03.811) 0:00:27.205 *** 2025-09-17 01:03:40.389099 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-17 01:03:40.389110 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-17 01:03:40.389120 | orchestrator | 2025-09-17 01:03:40.389131 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-17 01:03:40.389141 | orchestrator | Wednesday 17 September 2025 01:01:27 +0000 (0:00:08.254) 0:00:35.460 *** 2025-09-17 01:03:40.389156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.389219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.389233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.389246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.389270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.389282 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.389326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.389340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.389352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.389369 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.389390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.389405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.389418 | orchestrator | 2025-09-17 01:03:40.389459 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-17 01:03:40.389474 | orchestrator | Wednesday 17 September 2025 01:01:31 +0000 (0:00:03.549) 0:00:39.010 *** 2025-09-17 01:03:40.389487 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:03:40.389500 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:03:40.389512 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:03:40.389524 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:03:40.389537 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:03:40.389549 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:03:40.389561 | orchestrator | 2025-09-17 01:03:40.389574 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-17 01:03:40.389587 | orchestrator | Wednesday 17 September 2025 01:01:31 +0000 (0:00:00.561) 0:00:39.571 *** 2025-09-17 01:03:40.389599 | orchestrator | skipping: 
[testbed-node-0] 2025-09-17 01:03:40.389611 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:03:40.389623 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:03:40.389635 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 01:03:40.389648 | orchestrator | 2025-09-17 01:03:40.389660 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-17 01:03:40.389673 | orchestrator | Wednesday 17 September 2025 01:01:33 +0000 (0:00:01.420) 0:00:40.992 *** 2025-09-17 01:03:40.389685 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-17 01:03:40.389697 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-17 01:03:40.389710 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-17 01:03:40.389722 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-17 01:03:40.389741 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-17 01:03:40.389752 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-17 01:03:40.389762 | orchestrator | 2025-09-17 01:03:40.389773 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-17 01:03:40.389783 | orchestrator | Wednesday 17 September 2025 01:01:35 +0000 (0:00:02.216) 0:00:43.208 *** 2025-09-17 01:03:40.389796 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 01:03:40.389814 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 01:03:40.389827 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 01:03:40.389869 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 01:03:40.389883 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 01:03:40.389900 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 01:03:40.389917 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-17 01:03:40.389950 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-17 01:03:40.389994 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-17 01:03:40.390061 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-17 01:03:40.390077 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-17 01:03:40.390094 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-17 01:03:40.390106 | orchestrator | 2025-09-17 01:03:40.390117 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-17 01:03:40.390128 | orchestrator | Wednesday 17 September 2025 01:01:38 +0000 (0:00:03.313) 0:00:46.522 *** 2025-09-17 
01:03:40.390139 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 01:03:40.390151 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 01:03:40.390162 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 01:03:40.390173 | orchestrator | 2025-09-17 01:03:40.390184 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-17 01:03:40.390195 | orchestrator | Wednesday 17 September 2025 01:01:40 +0000 (0:00:01.851) 0:00:48.374 *** 2025-09-17 01:03:40.390205 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-17 01:03:40.390216 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-17 01:03:40.390227 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-17 01:03:40.390238 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-17 01:03:40.390248 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-17 01:03:40.390292 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-17 01:03:40.390305 | orchestrator | 2025-09-17 01:03:40.390316 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-17 01:03:40.390326 | orchestrator | Wednesday 17 September 2025 01:01:43 +0000 (0:00:02.794) 0:00:51.168 *** 2025-09-17 01:03:40.390347 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-17 01:03:40.390358 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-17 01:03:40.390369 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-17 01:03:40.390380 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-17 
01:03:40.390390 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-17 01:03:40.390401 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-17 01:03:40.390412 | orchestrator | 2025-09-17 01:03:40.390422 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-17 01:03:40.390433 | orchestrator | Wednesday 17 September 2025 01:01:44 +0000 (0:00:01.134) 0:00:52.302 *** 2025-09-17 01:03:40.390444 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:03:40.390454 | orchestrator | 2025-09-17 01:03:40.390465 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-17 01:03:40.390476 | orchestrator | Wednesday 17 September 2025 01:01:44 +0000 (0:00:00.130) 0:00:52.433 *** 2025-09-17 01:03:40.390486 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:03:40.390497 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:03:40.390508 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:03:40.390518 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:03:40.390529 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:03:40.390540 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:03:40.390550 | orchestrator | 2025-09-17 01:03:40.390561 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-17 01:03:40.390571 | orchestrator | Wednesday 17 September 2025 01:01:45 +0000 (0:00:00.582) 0:00:53.016 *** 2025-09-17 01:03:40.390584 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 01:03:40.390597 | orchestrator | 2025-09-17 01:03:40.390608 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-17 01:03:40.390618 | orchestrator | Wednesday 17 September 2025 01:01:46 +0000 (0:00:00.990) 0:00:54.006 
*** 2025-09-17 01:03:40.390641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.390653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.390695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.390716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.390727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.390744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.390755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.390773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.390817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.390842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.390854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.390870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.390882 | orchestrator | 2025-09-17 01:03:40.390893 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-17 01:03:40.390904 | orchestrator | Wednesday 17 September 2025 01:01:49 +0000 (0:00:03.011) 0:00:57.018 *** 2025-09-17 01:03:40.390916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 01:03:40.390995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 01:03:40.391021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391032 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:03:40.391044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391079 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:03:40.391090 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:03:40.391101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 01:03:40.391121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391133 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:03:40.391144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391167 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:03:40.391183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391211 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:03:40.391221 | orchestrator | 2025-09-17 01:03:40.391231 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-17 01:03:40.391241 
| orchestrator | Wednesday 17 September 2025 01:01:51 +0000 (0:00:02.229) 0:00:59.248 *** 2025-09-17 01:03:40.391256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 01:03:40.391267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391277 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:03:40.391287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 01:03:40.391302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 01:03:40.391334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391344 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:03:40.391354 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:03:40.391364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391384 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:03:40.391398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391425 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:03:40.391441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.391452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2025-09-17 01:03:40.391461 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:03:40.391471 | orchestrator | 2025-09-17 01:03:40.391481 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-17 01:03:40.391490 | orchestrator | Wednesday 17 September 2025 01:01:53 +0000 (0:00:01.390) 0:01:00.638 *** 2025-09-17 01:03:40.391500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.391524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.391535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.391551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2025-09-17 01:03:40.391562 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391668 | orchestrator | 2025-09-17 01:03:40.391678 | orchestrator | TASK [cinder : Copying over 
cinder-wsgi.conf] ********************************** 2025-09-17 01:03:40.391688 | orchestrator | Wednesday 17 September 2025 01:01:55 +0000 (0:00:02.784) 0:01:03.423 *** 2025-09-17 01:03:40.391697 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-17 01:03:40.391707 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:03:40.391717 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-17 01:03:40.391726 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:03:40.391740 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-17 01:03:40.391750 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-17 01:03:40.391770 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:03:40.391780 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-17 01:03:40.391789 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-17 01:03:40.391799 | orchestrator | 2025-09-17 01:03:40.391808 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-17 01:03:40.391817 | orchestrator | Wednesday 17 September 2025 01:01:57 +0000 (0:00:01.706) 0:01:05.129 *** 2025-09-17 01:03:40.391827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.391844 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.391885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 
01:03:40.391899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391910 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391987 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.391997 | orchestrator | 2025-09-17 01:03:40.392007 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-17 01:03:40.392017 | orchestrator | Wednesday 17 September 2025 01:02:06 +0000 (0:00:08.604) 0:01:13.733 *** 2025-09-17 01:03:40.392031 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:03:40.392041 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:03:40.392051 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:03:40.392060 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:03:40.392070 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:03:40.392079 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:03:40.392089 | orchestrator | 2025-09-17 01:03:40.392098 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-17 01:03:40.392108 | orchestrator | Wednesday 17 September 2025 01:02:08 +0000 (0:00:01.906) 0:01:15.640 *** 2025-09-17 
01:03:40.392124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 01:03:40.392135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.392145 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:03:40.392159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 01:03:40.392170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.392180 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:03:40.392194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 01:03:40.392211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.392221 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:03:40.392231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.392241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.392251 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:03:40.392265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.392275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.392285 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:03:40.392306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.392317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 01:03:40.392327 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:03:40.392337 | orchestrator | 2025-09-17 
01:03:40.392346 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-17 01:03:40.392356 | orchestrator | Wednesday 17 September 2025 01:02:09 +0000 (0:00:01.052) 0:01:16.692 *** 2025-09-17 01:03:40.392366 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:03:40.392375 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:03:40.392385 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:03:40.392394 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:03:40.392404 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:03:40.392413 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:03:40.392423 | orchestrator | 2025-09-17 01:03:40.392433 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-17 01:03:40.392442 | orchestrator | Wednesday 17 September 2025 01:02:09 +0000 (0:00:00.539) 0:01:17.231 *** 2025-09-17 01:03:40.392456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.392467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.392491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 01:03:40.392503 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.392513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.392527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.392538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.392559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.392570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.392580 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.392590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.392606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 01:03:40.392616 | orchestrator | 2025-09-17 01:03:40.392626 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-17 01:03:40.392636 | orchestrator | Wednesday 17 September 2025 01:02:12 +0000 (0:00:02.566) 0:01:19.797 *** 2025-09-17 01:03:40.392645 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:03:40.392662 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:03:40.392672 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:03:40.392681 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:03:40.392691 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:03:40.392700 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:03:40.392709 | orchestrator | 2025-09-17 01:03:40.392719 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-17 01:03:40.392729 | orchestrator | Wednesday 17 September 2025 01:02:12 +0000 (0:00:00.484) 0:01:20.282 *** 2025-09-17 01:03:40.392738 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:03:40.392748 | orchestrator | 2025-09-17 01:03:40.392757 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-17 01:03:40.392767 | orchestrator | Wednesday 17 September 2025 01:02:15 +0000 (0:00:02.449) 0:01:22.731 *** 2025-09-17 01:03:40.392776 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:03:40.392786 | orchestrator | 2025-09-17 01:03:40.392795 | orchestrator | TASK [cinder : 
Running Cinder bootstrap container] ***************************** 2025-09-17 01:03:40.392805 | orchestrator | Wednesday 17 September 2025 01:02:17 +0000 (0:00:02.375) 0:01:25.107 *** 2025-09-17 01:03:40.392814 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:03:40.392824 | orchestrator | 2025-09-17 01:03:40.392833 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 01:03:40.392843 | orchestrator | Wednesday 17 September 2025 01:02:35 +0000 (0:00:18.278) 0:01:43.385 *** 2025-09-17 01:03:40.392853 | orchestrator | 2025-09-17 01:03:40.392866 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 01:03:40.392876 | orchestrator | Wednesday 17 September 2025 01:02:35 +0000 (0:00:00.060) 0:01:43.445 *** 2025-09-17 01:03:40.392886 | orchestrator | 2025-09-17 01:03:40.392895 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 01:03:40.392905 | orchestrator | Wednesday 17 September 2025 01:02:35 +0000 (0:00:00.056) 0:01:43.502 *** 2025-09-17 01:03:40.392915 | orchestrator | 2025-09-17 01:03:40.392924 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 01:03:40.392977 | orchestrator | Wednesday 17 September 2025 01:02:35 +0000 (0:00:00.078) 0:01:43.580 *** 2025-09-17 01:03:40.392987 | orchestrator | 2025-09-17 01:03:40.392996 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 01:03:40.393006 | orchestrator | Wednesday 17 September 2025 01:02:36 +0000 (0:00:00.073) 0:01:43.654 *** 2025-09-17 01:03:40.393015 | orchestrator | 2025-09-17 01:03:40.393025 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 01:03:40.393035 | orchestrator | Wednesday 17 September 2025 01:02:36 +0000 (0:00:00.063) 0:01:43.718 *** 2025-09-17 01:03:40.393044 | 
orchestrator | 2025-09-17 01:03:40.393054 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-17 01:03:40.393063 | orchestrator | Wednesday 17 September 2025 01:02:36 +0000 (0:00:00.061) 0:01:43.780 *** 2025-09-17 01:03:40.393072 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:03:40.393082 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:03:40.393091 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:03:40.393101 | orchestrator | 2025-09-17 01:03:40.393110 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-17 01:03:40.393119 | orchestrator | Wednesday 17 September 2025 01:02:56 +0000 (0:00:20.635) 0:02:04.415 *** 2025-09-17 01:03:40.393129 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:03:40.393138 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:03:40.393147 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:03:40.393157 | orchestrator | 2025-09-17 01:03:40.393166 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-17 01:03:40.393176 | orchestrator | Wednesday 17 September 2025 01:03:01 +0000 (0:00:05.126) 0:02:09.542 *** 2025-09-17 01:03:40.393185 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:03:40.393195 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:03:40.393211 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:03:40.393220 | orchestrator | 2025-09-17 01:03:40.393230 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-17 01:03:40.393239 | orchestrator | Wednesday 17 September 2025 01:03:32 +0000 (0:00:30.860) 0:02:40.402 *** 2025-09-17 01:03:40.393249 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:03:40.393258 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:03:40.393268 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:03:40.393277 | orchestrator | 
2025-09-17 01:03:40.393287 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-17 01:03:40.393297 | orchestrator | Wednesday 17 September 2025 01:03:38 +0000 (0:00:05.784) 0:02:46.186 *** 2025-09-17 01:03:40.393306 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:03:40.393316 | orchestrator | 2025-09-17 01:03:40.393325 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 01:03:40.393335 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 01:03:40.393349 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-17 01:03:40.393359 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-17 01:03:40.393369 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-17 01:03:40.393379 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-17 01:03:40.393389 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-17 01:03:40.393398 | orchestrator | 2025-09-17 01:03:40.393408 | orchestrator | 2025-09-17 01:03:40.393417 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 01:03:40.393427 | orchestrator | Wednesday 17 September 2025 01:03:39 +0000 (0:00:00.529) 0:02:46.715 *** 2025-09-17 01:03:40.393436 | orchestrator | =============================================================================== 2025-09-17 01:03:40.393446 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 30.86s 2025-09-17 01:03:40.393455 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 20.64s 2025-09-17 
01:03:40.393465 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.28s 2025-09-17 01:03:40.393474 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 8.60s 2025-09-17 01:03:40.393482 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.25s 2025-09-17 01:03:40.393489 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.05s 2025-09-17 01:03:40.393497 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.78s 2025-09-17 01:03:40.393505 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.13s 2025-09-17 01:03:40.393517 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.33s 2025-09-17 01:03:40.393525 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.17s 2025-09-17 01:03:40.393533 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.81s 2025-09-17 01:03:40.393541 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.60s 2025-09-17 01:03:40.393549 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.55s 2025-09-17 01:03:40.393556 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.31s 2025-09-17 01:03:40.393564 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.01s 2025-09-17 01:03:40.393577 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.79s 2025-09-17 01:03:40.393585 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.78s 2025-09-17 01:03:40.393592 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.57s 2025-09-17 01:03:40.393600 
| orchestrator | cinder : include_tasks -------------------------------------------------- 2.54s 2025-09-17 01:03:40.393608 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.45s 2025-09-17 01:03:40.393616 | orchestrator | 2025-09-17 01:03:40 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:40.393624 | orchestrator | 2025-09-17 01:03:40 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:43.442744 | orchestrator | 2025-09-17 01:03:43 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:43.445799 | orchestrator | 2025-09-17 01:03:43 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:03:43.449008 | orchestrator | 2025-09-17 01:03:43 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:43.451612 | orchestrator | 2025-09-17 01:03:43 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:43.451792 | orchestrator | 2025-09-17 01:03:43 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:46.481427 | orchestrator | 2025-09-17 01:03:46 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:46.482569 | orchestrator | 2025-09-17 01:03:46 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:03:46.484017 | orchestrator | 2025-09-17 01:03:46 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:46.486142 | orchestrator | 2025-09-17 01:03:46 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:46.486177 | orchestrator | 2025-09-17 01:03:46 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:49.531648 | orchestrator | 2025-09-17 01:03:49 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:49.532767 | orchestrator | 2025-09-17 01:03:49 | INFO  | Task 
d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:03:49.534576 | orchestrator | 2025-09-17 01:03:49 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:49.535917 | orchestrator | 2025-09-17 01:03:49 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:49.536160 | orchestrator | 2025-09-17 01:03:49 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:52.578313 | orchestrator | 2025-09-17 01:03:52 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:52.580728 | orchestrator | 2025-09-17 01:03:52 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:03:52.582508 | orchestrator | 2025-09-17 01:03:52 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:52.585391 | orchestrator | 2025-09-17 01:03:52 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:52.585424 | orchestrator | 2025-09-17 01:03:52 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:55.626679 | orchestrator | 2025-09-17 01:03:55 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state STARTED 2025-09-17 01:03:55.627328 | orchestrator | 2025-09-17 01:03:55 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:03:55.628717 | orchestrator | 2025-09-17 01:03:55 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:55.629916 | orchestrator | 2025-09-17 01:03:55 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:55.629982 | orchestrator | 2025-09-17 01:03:55 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:03:58.666703 | orchestrator | 2025-09-17 01:03:58 | INFO  | Task e1766712-73b0-47ac-89d5-9eca75aa4ec3 is in state SUCCESS 2025-09-17 01:03:58.667501 | orchestrator | 2025-09-17 01:03:58 | INFO  | Task 
d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:03:58.669317 | orchestrator | 2025-09-17 01:03:58 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:03:58.670652 | orchestrator | 2025-09-17 01:03:58 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:03:58.670770 | orchestrator | 2025-09-17 01:03:58 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:01.719115 | orchestrator | 2025-09-17 01:04:01 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:01.721556 | orchestrator | 2025-09-17 01:04:01 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:01.724383 | orchestrator | 2025-09-17 01:04:01 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:01.724597 | orchestrator | 2025-09-17 01:04:01 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:04.767462 | orchestrator | 2025-09-17 01:04:04 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:04.768894 | orchestrator | 2025-09-17 01:04:04 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:04.770152 | orchestrator | 2025-09-17 01:04:04 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:04.770181 | orchestrator | 2025-09-17 01:04:04 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:07.817375 | orchestrator | 2025-09-17 01:04:07 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:07.818779 | orchestrator | 2025-09-17 01:04:07 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:07.820365 | orchestrator | 2025-09-17 01:04:07 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:07.820568 | orchestrator | 2025-09-17 01:04:07 | INFO  | Wait 1 second(s) until the next 
check 2025-09-17 01:04:10.862407 | orchestrator | 2025-09-17 01:04:10 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:10.862780 | orchestrator | 2025-09-17 01:04:10 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:10.866759 | orchestrator | 2025-09-17 01:04:10 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:10.866799 | orchestrator | 2025-09-17 01:04:10 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:13.919225 | orchestrator | 2025-09-17 01:04:13 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:13.920196 | orchestrator | 2025-09-17 01:04:13 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:13.921808 | orchestrator | 2025-09-17 01:04:13 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:13.921833 | orchestrator | 2025-09-17 01:04:13 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:16.964392 | orchestrator | 2025-09-17 01:04:16 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:16.965956 | orchestrator | 2025-09-17 01:04:16 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:16.968687 | orchestrator | 2025-09-17 01:04:16 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:16.968712 | orchestrator | 2025-09-17 01:04:16 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:20.017435 | orchestrator | 2025-09-17 01:04:20 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:20.020223 | orchestrator | 2025-09-17 01:04:20 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:20.022564 | orchestrator | 2025-09-17 01:04:20 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 
01:04:20.023718 | orchestrator | 2025-09-17 01:04:20 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:23.082178 | orchestrator | 2025-09-17 01:04:23 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:23.082438 | orchestrator | 2025-09-17 01:04:23 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:23.083673 | orchestrator | 2025-09-17 01:04:23 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:23.083700 | orchestrator | 2025-09-17 01:04:23 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:26.116792 | orchestrator | 2025-09-17 01:04:26 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:26.116897 | orchestrator | 2025-09-17 01:04:26 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:26.117480 | orchestrator | 2025-09-17 01:04:26 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:26.117507 | orchestrator | 2025-09-17 01:04:26 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:29.162207 | orchestrator | 2025-09-17 01:04:29 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:29.162599 | orchestrator | 2025-09-17 01:04:29 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:29.164200 | orchestrator | 2025-09-17 01:04:29 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:29.164223 | orchestrator | 2025-09-17 01:04:29 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:32.207528 | orchestrator | 2025-09-17 01:04:32 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:32.210003 | orchestrator | 2025-09-17 01:04:32 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:32.211321 | orchestrator | 2025-09-17 01:04:32 | 
INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:32.211350 | orchestrator | 2025-09-17 01:04:32 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:35.248703 | orchestrator | 2025-09-17 01:04:35 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:35.249803 | orchestrator | 2025-09-17 01:04:35 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:35.251231 | orchestrator | 2025-09-17 01:04:35 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:35.251581 | orchestrator | 2025-09-17 01:04:35 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:38.275599 | orchestrator | 2025-09-17 01:04:38 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:38.276281 | orchestrator | 2025-09-17 01:04:38 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:38.277714 | orchestrator | 2025-09-17 01:04:38 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:38.277741 | orchestrator | 2025-09-17 01:04:38 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:41.324825 | orchestrator | 2025-09-17 01:04:41 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:41.326248 | orchestrator | 2025-09-17 01:04:41 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:41.327833 | orchestrator | 2025-09-17 01:04:41 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:41.327871 | orchestrator | 2025-09-17 01:04:41 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:44.375003 | orchestrator | 2025-09-17 01:04:44 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:44.376116 | orchestrator | 2025-09-17 01:04:44 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in 
state STARTED 2025-09-17 01:04:44.377567 | orchestrator | 2025-09-17 01:04:44 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:44.377592 | orchestrator | 2025-09-17 01:04:44 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:47.426714 | orchestrator | 2025-09-17 01:04:47 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:47.427905 | orchestrator | 2025-09-17 01:04:47 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:47.429582 | orchestrator | 2025-09-17 01:04:47 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:47.429602 | orchestrator | 2025-09-17 01:04:47 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:50.477417 | orchestrator | 2025-09-17 01:04:50 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:50.477543 | orchestrator | 2025-09-17 01:04:50 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:50.478302 | orchestrator | 2025-09-17 01:04:50 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:50.478329 | orchestrator | 2025-09-17 01:04:50 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:53.517706 | orchestrator | 2025-09-17 01:04:53 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:53.518579 | orchestrator | 2025-09-17 01:04:53 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:53.520501 | orchestrator | 2025-09-17 01:04:53 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:53.520522 | orchestrator | 2025-09-17 01:04:53 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:56.571497 | orchestrator | 2025-09-17 01:04:56 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:56.572454 | orchestrator 
| 2025-09-17 01:04:56 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:56.574396 | orchestrator | 2025-09-17 01:04:56 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:56.574419 | orchestrator | 2025-09-17 01:04:56 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:04:59.617539 | orchestrator | 2025-09-17 01:04:59 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:04:59.620099 | orchestrator | 2025-09-17 01:04:59 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:04:59.621118 | orchestrator | 2025-09-17 01:04:59 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:04:59.621173 | orchestrator | 2025-09-17 01:04:59 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:02.670546 | orchestrator | 2025-09-17 01:05:02 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:02.673071 | orchestrator | 2025-09-17 01:05:02 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state STARTED 2025-09-17 01:05:02.676039 | orchestrator | 2025-09-17 01:05:02 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:02.676160 | orchestrator | 2025-09-17 01:05:02 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:05.726711 | orchestrator | 2025-09-17 01:05:05 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:05.728003 | orchestrator | 2025-09-17 01:05:05 | INFO  | Task d4adb89c-4eb8-4ed3-84d2-0924a8ada6b3 is in state SUCCESS 2025-09-17 01:05:05.729400 | orchestrator | 2025-09-17 01:05:05 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:05.729432 | orchestrator | 2025-09-17 01:05:05 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:08.779866 | orchestrator | 2025-09-17 01:05:08 | INFO  | Task 
d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:08.780303 | orchestrator | 2025-09-17 01:05:08 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:08.780530 | orchestrator | 2025-09-17 01:05:08 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:11.830506 | orchestrator | 2025-09-17 01:05:11 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:11.831715 | orchestrator | 2025-09-17 01:05:11 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:11.831757 | orchestrator | 2025-09-17 01:05:11 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:14.884655 | orchestrator | 2025-09-17 01:05:14 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:14.886151 | orchestrator | 2025-09-17 01:05:14 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:14.886180 | orchestrator | 2025-09-17 01:05:14 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:17.928141 | orchestrator | 2025-09-17 01:05:17 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:17.931614 | orchestrator | 2025-09-17 01:05:17 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:17.931651 | orchestrator | 2025-09-17 01:05:17 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:20.971463 | orchestrator | 2025-09-17 01:05:20 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:20.972427 | orchestrator | 2025-09-17 01:05:20 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:20.972458 | orchestrator | 2025-09-17 01:05:20 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:24.037970 | orchestrator | 2025-09-17 01:05:24 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 
01:05:24.041508 | orchestrator | 2025-09-17 01:05:24 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:24.041538 | orchestrator | 2025-09-17 01:05:24 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:27.093425 | orchestrator | 2025-09-17 01:05:27 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:27.093784 | orchestrator | 2025-09-17 01:05:27 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:27.094075 | orchestrator | 2025-09-17 01:05:27 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:30.149915 | orchestrator | 2025-09-17 01:05:30 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:30.151538 | orchestrator | 2025-09-17 01:05:30 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:30.151767 | orchestrator | 2025-09-17 01:05:30 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:33.205967 | orchestrator | 2025-09-17 01:05:33 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:33.207132 | orchestrator | 2025-09-17 01:05:33 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:33.207165 | orchestrator | 2025-09-17 01:05:33 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:36.248822 | orchestrator | 2025-09-17 01:05:36 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:36.250130 | orchestrator | 2025-09-17 01:05:36 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:36.250178 | orchestrator | 2025-09-17 01:05:36 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:39.292957 | orchestrator | 2025-09-17 01:05:39 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:39.293741 | orchestrator | 2025-09-17 01:05:39 | INFO  | Task 
29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:39.294155 | orchestrator | 2025-09-17 01:05:39 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:42.333881 | orchestrator | 2025-09-17 01:05:42 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:42.335515 | orchestrator | 2025-09-17 01:05:42 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:42.335704 | orchestrator | 2025-09-17 01:05:42 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:45.373968 | orchestrator | 2025-09-17 01:05:45 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:45.382777 | orchestrator | 2025-09-17 01:05:45 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:45.382828 | orchestrator | 2025-09-17 01:05:45 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:48.423626 | orchestrator | 2025-09-17 01:05:48 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:48.425050 | orchestrator | 2025-09-17 01:05:48 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:48.425349 | orchestrator | 2025-09-17 01:05:48 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:51.467830 | orchestrator | 2025-09-17 01:05:51 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:51.467987 | orchestrator | 2025-09-17 01:05:51 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:51.468004 | orchestrator | 2025-09-17 01:05:51 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:54.511158 | orchestrator | 2025-09-17 01:05:54 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state STARTED 2025-09-17 01:05:54.511723 | orchestrator | 2025-09-17 01:05:54 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 
01:05:54.511757 | orchestrator | 2025-09-17 01:05:54 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:05:57.555350 | orchestrator | 2025-09-17 01:05:57 | INFO  | Task d80824b3-ce91-4c15-909b-3b8bc3c93074 is in state SUCCESS 2025-09-17 01:05:57.555830 | orchestrator | 2025-09-17 01:05:57.555875 | orchestrator | 2025-09-17 01:05:57.555895 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 01:05:57.555913 | orchestrator | 2025-09-17 01:05:57.555956 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 01:05:57.555971 | orchestrator | Wednesday 17 September 2025 01:02:59 +0000 (0:00:00.261) 0:00:00.261 *** 2025-09-17 01:05:57.555982 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:05:57.555994 | orchestrator | ok: [testbed-node-1] 2025-09-17 01:05:57.556005 | orchestrator | ok: [testbed-node-2] 2025-09-17 01:05:57.556015 | orchestrator | 2025-09-17 01:05:57.556026 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 01:05:57.556037 | orchestrator | Wednesday 17 September 2025 01:02:59 +0000 (0:00:00.293) 0:00:00.554 *** 2025-09-17 01:05:57.556048 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-17 01:05:57.556059 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-17 01:05:57.556070 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-17 01:05:57.556080 | orchestrator | 2025-09-17 01:05:57.556092 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-17 01:05:57.556103 | orchestrator | 2025-09-17 01:05:57.556113 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-17 01:05:57.556124 | orchestrator | Wednesday 17 September 2025 01:03:00 +0000 (0:00:00.419) 0:00:00.974 *** 2025-09-17 01:05:57.556135 | orchestrator 
| included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 01:05:57.556146 | orchestrator | 2025-09-17 01:05:57.556157 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-17 01:05:57.556230 | orchestrator | Wednesday 17 September 2025 01:03:00 +0000 (0:00:00.540) 0:00:01.515 *** 2025-09-17 01:05:57.556243 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-17 01:05:57.556253 | orchestrator | 2025-09-17 01:05:57.556441 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-17 01:05:57.556459 | orchestrator | Wednesday 17 September 2025 01:03:04 +0000 (0:00:03.784) 0:00:05.299 *** 2025-09-17 01:05:57.556471 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-17 01:05:57.556484 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-17 01:05:57.556496 | orchestrator | 2025-09-17 01:05:57.556508 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-17 01:05:57.556522 | orchestrator | Wednesday 17 September 2025 01:03:11 +0000 (0:00:06.933) 0:00:12.233 *** 2025-09-17 01:05:57.556534 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-17 01:05:57.556547 | orchestrator | 2025-09-17 01:05:57.556567 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-17 01:05:57.556585 | orchestrator | Wednesday 17 September 2025 01:03:15 +0000 (0:00:03.540) 0:00:15.773 *** 2025-09-17 01:05:57.556604 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-17 01:05:57.556622 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-17 01:05:57.556640 | orchestrator | changed: [testbed-node-0] => (item=octavia 
-> service) 2025-09-17 01:05:57.556656 | orchestrator | 2025-09-17 01:05:57.556673 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-17 01:05:57.556692 | orchestrator | Wednesday 17 September 2025 01:03:24 +0000 (0:00:09.243) 0:00:25.017 *** 2025-09-17 01:05:57.556708 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-17 01:05:57.556725 | orchestrator | 2025-09-17 01:05:57.556743 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-17 01:05:57.556761 | orchestrator | Wednesday 17 September 2025 01:03:27 +0000 (0:00:03.334) 0:00:28.352 *** 2025-09-17 01:05:57.556781 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-17 01:05:57.556817 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-17 01:05:57.556838 | orchestrator | 2025-09-17 01:05:57.556857 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-17 01:05:57.556893 | orchestrator | Wednesday 17 September 2025 01:03:34 +0000 (0:00:06.993) 0:00:35.345 *** 2025-09-17 01:05:57.556906 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-17 01:05:57.556916 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-17 01:05:57.556953 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-17 01:05:57.556964 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-17 01:05:57.556975 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-17 01:05:57.556985 | orchestrator | 2025-09-17 01:05:57.556996 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-17 01:05:57.557007 | orchestrator | Wednesday 17 September 2025 01:03:52 +0000 (0:00:17.531) 0:00:52.876 *** 2025-09-17 01:05:57.557017 | 
orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 01:05:57.557029 | orchestrator | 2025-09-17 01:05:57.557039 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-17 01:05:57.557050 | orchestrator | Wednesday 17 September 2025 01:03:52 +0000 (0:00:00.609) 0:00:53.485 *** 2025-09-17 01:05:57.558217 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.", "response": "<html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request."} 2025-09-17 01:05:57.558523 | orchestrator | 2025-09-17 01:05:57.558546 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 01:05:57.558560 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-17 01:05:57.558573 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 01:05:57.558584 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 01:05:57.558595 | orchestrator | 2025-09-17 01:05:57.558606 | orchestrator | 2025-09-17 01:05:57.558617 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 01:05:57.558628 | orchestrator | Wednesday 17 September 2025 01:03:56 +0000 (0:00:03.569) 0:00:57.055 *** 2025-09-17 01:05:57.558639 | orchestrator | =============================================================================== 2025-09-17 01:05:57.558649 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.53s 2025-09-17 01:05:57.558660 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.24s 2025-09-17 01:05:57.558671 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.99s 2025-09-17 01:05:57.558682 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.93s 2025-09-17 01:05:57.558693 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.78s 2025-09-17 01:05:57.558703 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 
3.57s 2025-09-17 01:05:57.558714 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.54s 2025-09-17 01:05:57.558725 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.33s 2025-09-17 01:05:57.558736 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.61s 2025-09-17 01:05:57.558777 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.54s 2025-09-17 01:05:57.558789 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-09-17 01:05:57.558800 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-09-17 01:05:57.558811 | orchestrator | 2025-09-17 01:05:57.558825 | orchestrator | 2025-09-17 01:05:57.558844 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 01:05:57.558862 | orchestrator | 2025-09-17 01:05:57.558881 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 01:05:57.558896 | orchestrator | Wednesday 17 September 2025 01:02:11 +0000 (0:00:00.185) 0:00:00.185 *** 2025-09-17 01:05:57.558907 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:05:57.558919 | orchestrator | ok: [testbed-node-1] 2025-09-17 01:05:57.558959 | orchestrator | ok: [testbed-node-2] 2025-09-17 01:05:57.558971 | orchestrator | 2025-09-17 01:05:57.558982 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 01:05:57.558993 | orchestrator | Wednesday 17 September 2025 01:02:11 +0000 (0:00:00.280) 0:00:00.466 *** 2025-09-17 01:05:57.559004 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-17 01:05:57.559015 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-17 01:05:57.559026 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-09-17 01:05:57.559037 | orchestrator | 2025-09-17 01:05:57.559048 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-17 01:05:57.559059 | orchestrator | 2025-09-17 01:05:57.559069 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-17 01:05:57.559080 | orchestrator | Wednesday 17 September 2025 01:02:11 +0000 (0:00:00.509) 0:00:00.975 *** 2025-09-17 01:05:57.559091 | orchestrator | 2025-09-17 01:05:57.559115 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-09-17 01:05:57.559127 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:05:57.559138 | orchestrator | ok: [testbed-node-2] 2025-09-17 01:05:57.559148 | orchestrator | ok: [testbed-node-1] 2025-09-17 01:05:57.559159 | orchestrator | 2025-09-17 01:05:57.559170 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 01:05:57.559181 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 01:05:57.559192 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 01:05:57.559203 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 01:05:57.559213 | orchestrator | 2025-09-17 01:05:57.559224 | orchestrator | 2025-09-17 01:05:57.559235 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 01:05:57.559245 | orchestrator | Wednesday 17 September 2025 01:05:02 +0000 (0:02:50.798) 0:02:51.774 *** 2025-09-17 01:05:57.559256 | orchestrator | =============================================================================== 2025-09-17 01:05:57.559267 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 170.80s 2025-09-17 
01:05:57.559277 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2025-09-17 01:05:57.559288 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2025-09-17 01:05:57.559299 | orchestrator | 2025-09-17 01:05:57.559340 | orchestrator | 2025-09-17 01:05:57.559351 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 01:05:57.559362 | orchestrator | 2025-09-17 01:05:57.559373 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 01:05:57.559384 | orchestrator | Wednesday 17 September 2025 01:03:43 +0000 (0:00:00.253) 0:00:00.253 *** 2025-09-17 01:05:57.559394 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:05:57.559405 | orchestrator | ok: [testbed-node-1] 2025-09-17 01:05:57.559425 | orchestrator | ok: [testbed-node-2] 2025-09-17 01:05:57.559436 | orchestrator | 2025-09-17 01:05:57.559447 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 01:05:57.559458 | orchestrator | Wednesday 17 September 2025 01:03:43 +0000 (0:00:00.291) 0:00:00.545 *** 2025-09-17 01:05:57.559469 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-17 01:05:57.559480 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-17 01:05:57.559490 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-17 01:05:57.559501 | orchestrator | 2025-09-17 01:05:57.559512 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-17 01:05:57.559522 | orchestrator | 2025-09-17 01:05:57.559533 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-17 01:05:57.559544 | orchestrator | Wednesday 17 September 2025 01:03:43 +0000 (0:00:00.375) 0:00:00.920 *** 2025-09-17 01:05:57.559555 | orchestrator 
| included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 01:05:57.559566 | orchestrator | 2025-09-17 01:05:57.559577 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-17 01:05:57.559588 | orchestrator | Wednesday 17 September 2025 01:03:44 +0000 (0:00:00.513) 0:00:01.434 *** 2025-09-17 01:05:57.559601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.559614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.559632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.559644 | orchestrator | 2025-09-17 01:05:57.559655 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-17 01:05:57.559665 | orchestrator | Wednesday 17 September 2025 01:03:45 +0000 (0:00:00.987) 0:00:02.422 *** 2025-09-17 01:05:57.559676 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-17 01:05:57.559687 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-17 01:05:57.559699 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 01:05:57.559710 | orchestrator | 2025-09-17 01:05:57.559721 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-17 01:05:57.559746 | orchestrator | Wednesday 17 September 2025 01:03:46 +0000 (0:00:00.764) 0:00:03.187 *** 2025-09-17 01:05:57.559757 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 01:05:57.559768 | orchestrator | 2025-09-17 01:05:57.559778 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-17 01:05:57.559789 | orchestrator | Wednesday 17 September 2025 01:03:46 +0000 (0:00:00.562) 0:00:03.749 *** 2025-09-17 01:05:57.559813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.559825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.559837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.559848 | orchestrator | 2025-09-17 01:05:57.559859 
| orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-17 01:05:57.559869 | orchestrator | Wednesday 17 September 2025 01:03:47 +0000 (0:00:01.295) 0:00:05.045 *** 2025-09-17 01:05:57.559881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 01:05:57.559897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 01:05:57.559916 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:05:57.559954 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:05:57.559974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 01:05:57.559986 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:05:57.559996 | orchestrator | 2025-09-17 01:05:57.560007 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-17 01:05:57.560018 | orchestrator | Wednesday 17 September 2025 01:03:48 +0000 (0:00:00.337) 0:00:05.382 *** 2025-09-17 01:05:57.560030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 01:05:57.560041 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:05:57.560052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 01:05:57.560064 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:05:57.560075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 01:05:57.560086 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:05:57.560097 | orchestrator | 2025-09-17 01:05:57.560108 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-17 01:05:57.560119 | orchestrator | Wednesday 17 September 2025 01:03:49 +0000 (0:00:00.805) 0:00:06.187 *** 2025-09-17 01:05:57.560135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.560154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.560172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.560184 | orchestrator | 2025-09-17 01:05:57.560195 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-17 01:05:57.560205 | orchestrator | Wednesday 17 September 2025 01:03:50 +0000 (0:00:01.261) 0:00:07.449 *** 2025-09-17 01:05:57.560216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.560228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.560239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.560256 | 
orchestrator | 2025-09-17 01:05:57.560268 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-17 01:05:57.560279 | orchestrator | Wednesday 17 September 2025 01:03:51 +0000 (0:00:01.298) 0:00:08.747 *** 2025-09-17 01:05:57.560289 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:05:57.560300 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:05:57.560311 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:05:57.560321 | orchestrator | 2025-09-17 01:05:57.560337 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-17 01:05:57.560348 | orchestrator | Wednesday 17 September 2025 01:03:52 +0000 (0:00:00.448) 0:00:09.196 *** 2025-09-17 01:05:57.560359 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-17 01:05:57.560370 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-17 01:05:57.560381 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-17 01:05:57.560391 | orchestrator | 2025-09-17 01:05:57.560402 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-17 01:05:57.560413 | orchestrator | Wednesday 17 September 2025 01:03:53 +0000 (0:00:01.236) 0:00:10.432 *** 2025-09-17 01:05:57.560423 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-17 01:05:57.560434 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-17 01:05:57.560446 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-17 01:05:57.560456 | orchestrator | 2025-09-17 01:05:57.560467 | 
orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-17 01:05:57.560478 | orchestrator | Wednesday 17 September 2025 01:03:54 +0000 (0:00:01.331) 0:00:11.763 *** 2025-09-17 01:05:57.560495 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 01:05:57.560506 | orchestrator | 2025-09-17 01:05:57.560518 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-17 01:05:57.560528 | orchestrator | Wednesday 17 September 2025 01:03:55 +0000 (0:00:00.715) 0:00:12.479 *** 2025-09-17 01:05:57.560539 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-17 01:05:57.560550 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-17 01:05:57.560560 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:05:57.560571 | orchestrator | ok: [testbed-node-1] 2025-09-17 01:05:57.560582 | orchestrator | ok: [testbed-node-2] 2025-09-17 01:05:57.560593 | orchestrator | 2025-09-17 01:05:57.560603 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-17 01:05:57.560614 | orchestrator | Wednesday 17 September 2025 01:03:56 +0000 (0:00:00.726) 0:00:13.205 *** 2025-09-17 01:05:57.560625 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:05:57.560635 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:05:57.560646 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:05:57.560657 | orchestrator | 2025-09-17 01:05:57.560668 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-17 01:05:57.560678 | orchestrator | Wednesday 17 September 2025 01:03:56 +0000 (0:00:00.463) 0:00:13.668 *** 2025-09-17 01:05:57.560690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1062328, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068075.9786637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-17 01:05:57.560710 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (items: Grafana dashboard files under /operations/grafana/dashboards/, all regular files, mode 0644, owner root:root; identical per-node stat output condensed to one entry per file)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/ceph-cluster-advanced.json   (117836 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/rbd-overview.json            (25686 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/ceph_pools.json              (25279 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/rgw-s3-analytics.json        (167897 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/osd-device-details.json      (26655 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/radosgw-overview.json        (39556 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/README.md                    (84 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/ceph-cluster.json            (34113 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/cephfs-overview.json         (9025 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/pool-detail.json             (19609 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/rbd-details.json             (12997 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/ceph_overview.json           (80386 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/radosgw-detail.json          (19695 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/osds-overview.json           (38432 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/multi-cluster-overview.json  (62676 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/hosts-overview.json          (27218 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/pool-overview.json           (49139 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/host-details.json            (44791 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   ceph/radosgw-sync-overview.json   (16156 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   openstack/openstack.json          (57270 bytes)
2025-09-17 01:05:57.560710 | orchestrator |   infrastructure/haproxy.json       (410814 bytes)
2025-09-17 01:05:57.561662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path':
'/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1062625, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0319016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.561674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1062556, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0204039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.561686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1062556, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0204039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.561697 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1062556, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0204039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.561713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1062693, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0427072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.561726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1062693, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0427072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-09-17 01:05:57.561751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1062693, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0427072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.561763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1062543, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0176644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.561774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1062543, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0176644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.561785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1062543, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0176644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.561797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1062790, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0646822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.561813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1062790, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0646822, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.561995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1062790, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0646822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1062730, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.061665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1062730, 
'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.061665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1062730, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.061665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1062794, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0646822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1062794, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0646822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1062794, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0646822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1062857, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0796654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1062857, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0796654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1062857, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0796654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1062788, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.063872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562180 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1062788, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.063872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1062788, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.063872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1062691, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0406647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562223 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1062691, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0406647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1062691, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0406647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1062572, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0284765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-09-17 01:05:57.562258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1062572, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0284765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1062572, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0284765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1062646, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0323074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1062646, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0323074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1062646, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0323074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1062561, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0216644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1062561, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0216644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1062561, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0216644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1062692, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 
'ctime': 1758068076.0416646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1062692, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0416646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1062692, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0416646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 222049, 'inode': 1062834, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0776653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1062834, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0776653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1062834, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0776653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1062797, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0725226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1062797, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0725226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1062797, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0725226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1062546, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.018918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1062546, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.018918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1062546, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.018918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562522 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1062551, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0196319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1062551, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0196319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1062551, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0196319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562553 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1062786, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.062665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1062786, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.062665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1062786, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.062665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1062795, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0653377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1062795, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0653377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1062795, 'dev': 159, 'nlink': 1, 'atime': 1758067332.0, 'mtime': 1758067332.0, 'ctime': 1758068076.0653377, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 01:05:57.562630 | orchestrator | 2025-09-17 01:05:57.562640 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-17 01:05:57.562650 | orchestrator | Wednesday 17 September 2025 01:04:33 +0000 (0:00:37.278) 0:00:50.947 *** 2025-09-17 01:05:57.562660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.562676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.562690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 01:05:57.562700 | orchestrator | 2025-09-17 01:05:57.562710 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-17 01:05:57.562720 | orchestrator | Wednesday 17 September 2025 01:04:34 +0000 (0:00:01.114) 0:00:52.061 *** 2025-09-17 01:05:57.562730 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:05:57.562739 | orchestrator | 2025-09-17 01:05:57.562749 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-17 01:05:57.562758 | orchestrator | Wednesday 17 September 2025 01:04:37 +0000 (0:00:02.313) 0:00:54.375 *** 2025-09-17 01:05:57.562768 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:05:57.562778 | orchestrator | 2025-09-17 01:05:57.562788 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-17 01:05:57.562797 | orchestrator | Wednesday 17 September 2025 01:04:39 +0000 (0:00:02.293) 0:00:56.669 *** 2025-09-17 01:05:57.562807 | orchestrator | 2025-09-17 01:05:57.562816 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-17 01:05:57.562830 | orchestrator | Wednesday 17 September 2025 01:04:39 +0000 (0:00:00.059) 0:00:56.728 *** 2025-09-17 01:05:57.562841 | orchestrator | 2025-09-17 01:05:57.562850 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2025-09-17 01:05:57.562860 | orchestrator | Wednesday 17 September 2025 01:04:39 +0000 (0:00:00.058) 0:00:56.787 *** 2025-09-17 01:05:57.562869 | orchestrator | 2025-09-17 01:05:57.562879 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-17 01:05:57.562889 | orchestrator | Wednesday 17 September 2025 01:04:39 +0000 (0:00:00.185) 0:00:56.972 *** 2025-09-17 01:05:57.562898 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:05:57.562908 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:05:57.562918 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:05:57.562944 | orchestrator | 2025-09-17 01:05:57.562954 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-17 01:05:57.562964 | orchestrator | Wednesday 17 September 2025 01:04:41 +0000 (0:00:01.855) 0:00:58.828 *** 2025-09-17 01:05:57.562973 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:05:57.562983 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:05:57.562992 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-17 01:05:57.563002 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-17 01:05:57.563018 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-09-17 01:05:57.563028 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:05:57.563037 | orchestrator | 2025-09-17 01:05:57.563047 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-17 01:05:57.563056 | orchestrator | Wednesday 17 September 2025 01:05:20 +0000 (0:00:38.678) 0:01:37.506 *** 2025-09-17 01:05:57.563066 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:05:57.563075 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:05:57.563085 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:05:57.563094 | orchestrator | 2025-09-17 01:05:57.563104 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-17 01:05:57.563113 | orchestrator | Wednesday 17 September 2025 01:05:48 +0000 (0:00:28.592) 0:02:06.099 *** 2025-09-17 01:05:57.563123 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:05:57.563133 | orchestrator | 2025-09-17 01:05:57.563142 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-17 01:05:57.563152 | orchestrator | Wednesday 17 September 2025 01:05:51 +0000 (0:00:02.452) 0:02:08.551 *** 2025-09-17 01:05:57.563161 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:05:57.563171 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:05:57.563180 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:05:57.563190 | orchestrator | 2025-09-17 01:05:57.563200 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-17 01:05:57.563209 | orchestrator | Wednesday 17 September 2025 01:05:52 +0000 (0:00:00.840) 0:02:09.392 *** 2025-09-17 01:05:57.563220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-09-17 01:05:57.563233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-17 01:05:57.563243 | orchestrator | 2025-09-17 01:05:57.563253 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-17 01:05:57.563263 | orchestrator | Wednesday 17 September 2025 01:05:54 +0000 (0:00:02.544) 0:02:11.936 *** 2025-09-17 01:05:57.563272 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:05:57.563282 | orchestrator | 2025-09-17 01:05:57.563292 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 01:05:57.563306 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-17 01:05:57.563316 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-17 01:05:57.563326 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-17 01:05:57.563335 | orchestrator | 2025-09-17 01:05:57.563345 | orchestrator | 2025-09-17 01:05:57.563355 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 01:05:57.563364 | orchestrator | Wednesday 17 September 2025 01:05:55 +0000 (0:00:00.371) 0:02:12.308 *** 2025-09-17 01:05:57.563374 | orchestrator | =============================================================================== 2025-09-17 01:05:57.563383 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.68s 2025-09-17 01:05:57.563393 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 37.28s 2025-09-17 01:05:57.563403 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 28.59s 2025-09-17 01:05:57.563417 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.54s 2025-09-17 01:05:57.563427 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.45s 2025-09-17 01:05:57.563441 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.31s 2025-09-17 01:05:57.563457 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.29s 2025-09-17 01:05:57.563473 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.86s 2025-09-17 01:05:57.563489 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.33s 2025-09-17 01:05:57.563506 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.30s 2025-09-17 01:05:57.563523 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.30s 2025-09-17 01:05:57.563539 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.26s 2025-09-17 01:05:57.563550 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.24s 2025-09-17 01:05:57.563559 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.11s 2025-09-17 01:05:57.563569 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.99s 2025-09-17 01:05:57.563578 | orchestrator | grafana : Remove old grafana docker volume ------------------------------ 0.84s 2025-09-17 01:05:57.563587 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.81s 2025-09-17 01:05:57.563597 | orchestrator | grafana : Check if extra configuration file 
exists ---------------------- 0.76s 2025-09-17 01:05:57.563606 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s 2025-09-17 01:05:57.563615 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.72s 2025-09-17 01:05:57.563625 | orchestrator | 2025-09-17 01:05:57 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:05:57.563635 | orchestrator | 2025-09-17 01:05:57 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:06:00.604751 | orchestrator | 2025-09-17 01:06:00 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:06:00.604861 | orchestrator | 2025-09-17 01:06:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:06:03.641850 | orchestrator | 2025-09-17 01:06:03 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:06:03.642194 | orchestrator | 2025-09-17 01:06:03 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:06:06.682589 | orchestrator | 2025-09-17 01:06:06 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:06:06.682692 | orchestrator | 2025-09-17 01:06:06 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:06:09.717251 | orchestrator | 2025-09-17 01:06:09 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:06:09.717364 | orchestrator | 2025-09-17 01:06:09 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:06:12.749472 | orchestrator | 2025-09-17 01:06:12 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:06:12.749572 | orchestrator | 2025-09-17 01:06:12 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:06:15.784077 | orchestrator | 2025-09-17 01:06:15 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:06:15.784202 | orchestrator | 2025-09-17 01:06:15 | INFO  | Wait 1 second(s) 
until the next check 2025-09-17 01:08:45.001049 | orchestrator | 2025-09-17
01:08:44 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:08:45.001155 | orchestrator | 2025-09-17 01:08:44 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:08:48.051956 | orchestrator | 2025-09-17 01:08:48 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:08:48.052061 | orchestrator | 2025-09-17 01:08:48 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:08:51.089616 | orchestrator | 2025-09-17 01:08:51 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:08:51.089722 | orchestrator | 2025-09-17 01:08:51 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:08:54.126241 | orchestrator | 2025-09-17 01:08:54 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:08:54.126351 | orchestrator | 2025-09-17 01:08:54 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:08:57.160020 | orchestrator | 2025-09-17 01:08:57 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:08:57.160128 | orchestrator | 2025-09-17 01:08:57 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:00.206665 | orchestrator | 2025-09-17 01:09:00 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:09:00.206772 | orchestrator | 2025-09-17 01:09:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:03.253140 | orchestrator | 2025-09-17 01:09:03 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:09:03.253254 | orchestrator | 2025-09-17 01:09:03 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:06.298879 | orchestrator | 2025-09-17 01:09:06 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:09:06.299034 | orchestrator | 2025-09-17 01:09:06 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:09.340901 | orchestrator | 2025-09-17 01:09:09 | INFO 
 | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:09:09.341046 | orchestrator | 2025-09-17 01:09:09 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:12.390610 | orchestrator | 2025-09-17 01:09:12 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:09:12.390717 | orchestrator | 2025-09-17 01:09:12 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:15.441052 | orchestrator | 2025-09-17 01:09:15 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:09:15.441167 | orchestrator | 2025-09-17 01:09:15 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:18.485293 | orchestrator | 2025-09-17 01:09:18 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:09:18.515448 | orchestrator | 2025-09-17 01:09:18 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:21.530339 | orchestrator | 2025-09-17 01:09:21 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:09:21.530441 | orchestrator | 2025-09-17 01:09:21 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:24.574559 | orchestrator | 2025-09-17 01:09:24 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:09:24.574669 | orchestrator | 2025-09-17 01:09:24 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:27.619270 | orchestrator | 2025-09-17 01:09:27 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:09:27.619368 | orchestrator | 2025-09-17 01:09:27 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:30.663110 | orchestrator | 2025-09-17 01:09:30 | INFO  | Task 29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state STARTED 2025-09-17 01:09:30.664069 | orchestrator | 2025-09-17 01:09:30 | INFO  | Wait 1 second(s) until the next check 2025-09-17 01:09:33.710255 | orchestrator | 2025-09-17 01:09:33 | INFO  | Task 
29d2bbd0-5d48-4f26-b71b-eea8389d4d06 is in state SUCCESS
2025-09-17 01:09:33.711483 | orchestrator |
2025-09-17 01:09:33.711522 | orchestrator |
2025-09-17 01:09:33.711613 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 01:09:33.711630 | orchestrator |
2025-09-17 01:09:33.711641 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-09-17 01:09:33.711653 | orchestrator | Wednesday 17 September 2025 01:01:00 +0000 (0:00:00.421) 0:00:00.421 ***
2025-09-17 01:09:33.711664 | orchestrator | changed: [testbed-manager]
2025-09-17 01:09:33.711676 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.711687 | orchestrator | changed: [testbed-node-1]
2025-09-17 01:09:33.711697 | orchestrator | changed: [testbed-node-2]
2025-09-17 01:09:33.711708 | orchestrator | changed: [testbed-node-3]
2025-09-17 01:09:33.711719 | orchestrator | changed: [testbed-node-4]
2025-09-17 01:09:33.711730 | orchestrator | changed: [testbed-node-5]
2025-09-17 01:09:33.711741 | orchestrator |
2025-09-17 01:09:33.711751 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 01:09:33.711762 | orchestrator | Wednesday 17 September 2025 01:01:01 +0000 (0:00:01.173) 0:00:01.594 ***
2025-09-17 01:09:33.712454 | orchestrator | changed: [testbed-manager]
2025-09-17 01:09:33.712469 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.712480 | orchestrator | changed: [testbed-node-1]
2025-09-17 01:09:33.712491 | orchestrator | changed: [testbed-node-2]
2025-09-17 01:09:33.712501 | orchestrator | changed: [testbed-node-3]
2025-09-17 01:09:33.712512 | orchestrator | changed: [testbed-node-4]
2025-09-17 01:09:33.712523 | orchestrator | changed: [testbed-node-5]
2025-09-17 01:09:33.712534 | orchestrator |
2025-09-17 01:09:33.712545 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 01:09:33.712556 | orchestrator | Wednesday 17 September 2025 01:01:02 +0000 (0:00:00.737) 0:00:02.332 ***
2025-09-17 01:09:33.712567 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-09-17 01:09:33.712578 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-09-17 01:09:33.712589 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-09-17 01:09:33.712599 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-09-17 01:09:33.712610 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-09-17 01:09:33.712621 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-09-17 01:09:33.712631 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-09-17 01:09:33.712731 | orchestrator |
2025-09-17 01:09:33.712743 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-09-17 01:09:33.712753 | orchestrator |
2025-09-17 01:09:33.712764 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-17 01:09:33.712775 | orchestrator | Wednesday 17 September 2025 01:01:03 +0000 (0:00:01.001) 0:00:03.334 ***
2025-09-17 01:09:33.712786 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 01:09:33.712797 | orchestrator |
2025-09-17 01:09:33.712808 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-09-17 01:09:33.712846 | orchestrator | Wednesday 17 September 2025 01:01:04 +0000 (0:00:00.889) 0:00:04.223 ***
2025-09-17 01:09:33.712858 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-09-17 01:09:33.712870 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-09-17 01:09:33.712881 | orchestrator |
2025-09-17 01:09:33.712892 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-09-17 01:09:33.712903 | orchestrator | Wednesday 17 September 2025 01:01:08 +0000 (0:00:04.784) 0:00:09.007 ***
2025-09-17 01:09:33.712915 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 01:09:33.712946 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 01:09:33.712958 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.712969 | orchestrator |
2025-09-17 01:09:33.712980 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-17 01:09:33.712991 | orchestrator | Wednesday 17 September 2025 01:01:13 +0000 (0:00:04.518) 0:00:13.526 ***
2025-09-17 01:09:33.713001 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.713012 | orchestrator |
2025-09-17 01:09:33.713023 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-09-17 01:09:33.713048 | orchestrator | Wednesday 17 September 2025 01:01:14 +0000 (0:00:00.692) 0:00:14.219 ***
2025-09-17 01:09:33.713062 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.713074 | orchestrator |
2025-09-17 01:09:33.713087 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-09-17 01:09:33.713100 | orchestrator | Wednesday 17 September 2025 01:01:15 +0000 (0:00:01.315) 0:00:15.534 ***
2025-09-17 01:09:33.713113 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.713125 | orchestrator |
2025-09-17 01:09:33.713138 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-17 01:09:33.713150 | orchestrator | Wednesday 17 September 2025 01:01:18 +0000 (0:00:02.853) 0:00:18.387 ***
2025-09-17 01:09:33.713163 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.713176 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.713188 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.713200 | orchestrator |
2025-09-17 01:09:33.713212 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-17 01:09:33.713224 | orchestrator | Wednesday 17 September 2025 01:01:18 +0000 (0:00:00.266) 0:00:18.654 ***
2025-09-17 01:09:33.713237 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:09:33.713249 | orchestrator |
2025-09-17 01:09:33.713261 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-09-17 01:09:33.713273 | orchestrator | Wednesday 17 September 2025 01:01:47 +0000 (0:00:29.439) 0:00:48.093 ***
2025-09-17 01:09:33.713286 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.713778 | orchestrator |
2025-09-17 01:09:33.713793 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-17 01:09:33.713804 | orchestrator | Wednesday 17 September 2025 01:02:03 +0000 (0:00:15.391) 0:01:03.485 ***
2025-09-17 01:09:33.713815 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:09:33.713826 | orchestrator |
2025-09-17 01:09:33.713837 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-17 01:09:33.713848 | orchestrator | Wednesday 17 September 2025 01:02:16 +0000 (0:00:13.349) 0:01:16.834 ***
2025-09-17 01:09:33.713958 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:09:33.713975 | orchestrator |
2025-09-17 01:09:33.713987 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-09-17 01:09:33.713997 | orchestrator | Wednesday 17 September 2025 01:02:17 +0000 (0:00:00.848) 0:01:17.683 ***
2025-09-17 01:09:33.714008 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.714213 | orchestrator |
2025-09-17 01:09:33.714233 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-17 01:09:33.714244 | orchestrator | Wednesday 17 September 2025 01:02:17 +0000 (0:00:00.416) 0:01:18.099 ***
2025-09-17 01:09:33.714255 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 01:09:33.714280 | orchestrator |
2025-09-17 01:09:33.714291 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-17 01:09:33.714302 | orchestrator | Wednesday 17 September 2025 01:02:18 +0000 (0:00:00.443) 0:01:18.542 ***
2025-09-17 01:09:33.714313 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:09:33.714323 | orchestrator |
2025-09-17 01:09:33.714334 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-17 01:09:33.714345 | orchestrator | Wednesday 17 September 2025 01:02:36 +0000 (0:00:18.115) 0:01:36.658 ***
2025-09-17 01:09:33.714356 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.714366 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.714377 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.714388 | orchestrator |
2025-09-17 01:09:33.714398 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-17 01:09:33.714409 | orchestrator |
2025-09-17 01:09:33.714420 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-17 01:09:33.714430 | orchestrator | Wednesday 17 September 2025 01:02:36 +0000 (0:00:00.517) 0:01:37.176 ***
2025-09-17 01:09:33.714441 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 01:09:33.714452 | orchestrator |
2025-09-17 01:09:33.714462 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-09-17 01:09:33.714473 | orchestrator | Wednesday 17 September 2025 01:02:37 +0000 (0:00:00.937) 0:01:38.113 ***
2025-09-17 01:09:33.714484 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.714494 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.714505 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.714516 | orchestrator |
2025-09-17 01:09:33.714527 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-09-17 01:09:33.714537 | orchestrator | Wednesday 17 September 2025 01:02:40 +0000 (0:00:02.102) 0:01:40.216 ***
2025-09-17 01:09:33.714548 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.714558 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.714569 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.714580 | orchestrator |
2025-09-17 01:09:33.714590 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-17 01:09:33.714601 | orchestrator | Wednesday 17 September 2025 01:02:42 +0000 (0:00:02.445) 0:01:42.661 ***
2025-09-17 01:09:33.714612 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.714622 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.714633 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.714644 | orchestrator |
2025-09-17 01:09:33.714655 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-17 01:09:33.714666 | orchestrator | Wednesday 17 September 2025 01:02:42 +0000 (0:00:00.270) 0:01:42.932 ***
2025-09-17 01:09:33.714676 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-17 01:09:33.714687 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.714698 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-17 01:09:33.714708 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.714719 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-17 01:09:33.714729 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-09-17 01:09:33.714740 | orchestrator |
2025-09-17 01:09:33.714751 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-17 01:09:33.714770 | orchestrator | Wednesday 17 September 2025 01:02:51 +0000 (0:00:08.976) 0:01:51.909 ***
2025-09-17 01:09:33.714781 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.714792 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.714802 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.714813 | orchestrator |
2025-09-17 01:09:33.714823 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-17 01:09:33.714834 | orchestrator | Wednesday 17 September 2025 01:02:52 +0000 (0:00:00.319) 0:01:52.229 ***
2025-09-17 01:09:33.714852 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-17 01:09:33.714864 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.714878 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-17 01:09:33.714890 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.714902 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-17 01:09:33.714915 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.714988 | orchestrator |
2025-09-17 01:09:33.715002 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-17 01:09:33.715014 | orchestrator | Wednesday 17 September 2025 01:02:52 +0000 (0:00:00.602) 0:01:52.831 ***
2025-09-17 01:09:33.715027 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.715039 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.715052 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.715064 | orchestrator |
2025-09-17 01:09:33.715076 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-09-17 01:09:33.715087 | orchestrator | Wednesday 17 September 2025 01:02:53 +0000 (0:00:00.455) 0:01:53.287 ***
2025-09-17 01:09:33.715098 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.715110 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.715120 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.715131 | orchestrator |
2025-09-17 01:09:33.715141 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-09-17 01:09:33.715151 | orchestrator | Wednesday 17 September 2025 01:02:54 +0000 (0:00:01.026) 0:01:54.313 ***
2025-09-17 01:09:33.715160 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.715170 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.715259 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.715275 | orchestrator |
2025-09-17 01:09:33.715285 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-09-17 01:09:33.715295 | orchestrator | Wednesday 17 September 2025 01:02:56 +0000 (0:00:02.059) 0:01:56.373 ***
2025-09-17 01:09:33.715305 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.715314 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.715324 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:09:33.715333 | orchestrator |
2025-09-17 01:09:33.715343 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-17 01:09:33.715353 | orchestrator | Wednesday 17 September 2025 01:03:17 +0000 (0:00:20.875) 0:02:17.249 ***
2025-09-17 01:09:33.715363 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.715372 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.715382 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:09:33.715392 | orchestrator |
2025-09-17 01:09:33.715401 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-17 01:09:33.715411 | orchestrator | Wednesday 17 September 2025 01:03:29 +0000 (0:00:12.787) 0:02:30.036 ***
2025-09-17 01:09:33.715421 | orchestrator | ok: [testbed-node-0]
2025-09-17 01:09:33.715430 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.715440 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.715449 | orchestrator |
2025-09-17 01:09:33.715459 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-09-17 01:09:33.715469 | orchestrator | Wednesday 17 September 2025 01:03:30 +0000 (0:00:01.046) 0:02:31.082 ***
2025-09-17 01:09:33.715478 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.715488 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.715498 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.715507 | orchestrator |
2025-09-17 01:09:33.715517 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-09-17 01:09:33.715526 | orchestrator | Wednesday 17 September 2025 01:03:43 +0000 (0:00:12.958) 0:02:44.041 ***
2025-09-17 01:09:33.715536 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.715545 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.715555 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.715572 | orchestrator |
2025-09-17 01:09:33.715582 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-17 01:09:33.715592 | orchestrator | Wednesday 17 September 2025 01:03:44 +0000 (0:00:01.085) 0:02:45.127 ***
2025-09-17 01:09:33.715601 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.715611 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.715621 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.715630 | orchestrator |
2025-09-17 01:09:33.715640 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-17 01:09:33.715650 | orchestrator |
2025-09-17 01:09:33.715659 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-17 01:09:33.715669 | orchestrator | Wednesday 17 September 2025 01:03:45 +0000 (0:00:00.417) 0:02:45.545 ***
2025-09-17 01:09:33.715679 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 01:09:33.715689 | orchestrator |
2025-09-17 01:09:33.715719 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-17 01:09:33.715729 | orchestrator | Wednesday 17 September 2025 01:03:45 +0000 (0:00:00.481) 0:02:46.026 ***
2025-09-17 01:09:33.715739 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-09-17 01:09:33.715749 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-17 01:09:33.715758 | orchestrator |
2025-09-17 01:09:33.715768 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-17 01:09:33.715778 | orchestrator | Wednesday 17 September 2025 01:03:49 +0000 (0:00:03.310) 0:02:49.336 ***
2025-09-17 01:09:33.715787 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-09-17 01:09:33.715805 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-09-17 01:09:33.715816 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-17 01:09:33.715826 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-17 01:09:33.715836 | orchestrator |
2025-09-17 01:09:33.715846 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-17 01:09:33.715855 | orchestrator | Wednesday 17 September 2025 01:03:56 +0000 (0:00:07.090) 0:02:56.427 ***
2025-09-17 01:09:33.715865 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-17 01:09:33.715875 | orchestrator |
2025-09-17 01:09:33.715888 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-17 01:09:33.715898 | orchestrator | Wednesday 17 September 2025 01:04:00 +0000 (0:00:03.874) 0:03:00.301 ***
2025-09-17 01:09:33.715910 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-17 01:09:33.715939 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-17 01:09:33.715951 | orchestrator |
2025-09-17 01:09:33.715962 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-17 01:09:33.715973 | orchestrator | Wednesday 17 September 2025 01:04:04 +0000 (0:00:04.049) 0:03:04.351 ***
2025-09-17 01:09:33.715984 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-17 01:09:33.715994 | orchestrator |
2025-09-17 01:09:33.716005 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-17 01:09:33.716016 | orchestrator | Wednesday 17 September 2025 01:04:07 +0000 (0:00:03.549) 0:03:07.900 ***
2025-09-17 01:09:33.716027 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-17 01:09:33.716037 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-09-17 01:09:33.716049 | orchestrator |
2025-09-17 01:09:33.716060 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-17 01:09:33.716142 | orchestrator | Wednesday 17 September 2025 01:04:15 +0000 (0:00:07.875) 0:03:15.775 ***
2025-09-17 01:09:33.716163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-17 01:09:33.716191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-17 01:09:33.716212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-17 01:09:33.716257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.716278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.716289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.716299 | orchestrator |
2025-09-17 01:09:33.716309 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-09-17 01:09:33.716319 | orchestrator | Wednesday 17 September 2025 01:04:16 +0000 (0:00:01.385) 0:03:17.161 ***
2025-09-17 01:09:33.716329 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.716339 | orchestrator |
2025-09-17 01:09:33.716348 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-09-17 01:09:33.716358 | orchestrator | Wednesday 17 September 2025 01:04:17 +0000 (0:00:00.140) 0:03:17.301 ***
2025-09-17 01:09:33.716367 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.716376 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.716386 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.716395 | orchestrator |
2025-09-17 01:09:33.716405 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-09-17 01:09:33.716414 | orchestrator | Wednesday 17 September 2025 01:04:17 +0000 (0:00:00.356) 0:03:17.658 ***
2025-09-17 01:09:33.716423 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-17 01:09:33.716433 | orchestrator |
2025-09-17 01:09:33.716443 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-09-17 01:09:33.716452 | orchestrator | Wednesday 17 September 2025 01:04:18 +0000 (0:00:00.992) 0:03:18.651 ***
2025-09-17 01:09:33.716462 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.716471 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.716481 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.716490 | orchestrator |
2025-09-17 01:09:33.716499 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-17 01:09:33.716509 | orchestrator | Wednesday 17 September 2025 01:04:18 +0000 (0:00:00.358) 0:03:19.009 ***
2025-09-17 01:09:33.716519 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 01:09:33.716528 | orchestrator |
2025-09-17 01:09:33.716537 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-17 01:09:33.716552 | orchestrator | Wednesday 17 September 2025 01:04:19 +0000 (0:00:00.608) 0:03:19.618 ***
2025-09-17 01:09:33.716563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-17 01:09:33.716609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port':
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 01:09:33.716624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 01:09:33.716636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.716647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.716725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.716740 | orchestrator | 2025-09-17 01:09:33.716751 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-17 01:09:33.716760 | orchestrator | Wednesday 17 September 2025 01:04:21 +0000 (0:00:02.492) 0:03:22.111 *** 2025-09-17 01:09:33.716771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 01:09:33.716782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:09:33.716793 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.716807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 01:09:33.716825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:09:33.716835 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.716873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 01:09:33.716887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:09:33.716898 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.716909 | orchestrator | 2025-09-17 01:09:33.716919 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-17 01:09:33.716951 | orchestrator | Wednesday 17 September 2025 01:04:22 +0000 (0:00:00.863) 0:03:22.974 *** 2025-09-17 01:09:33.716966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 01:09:33.716984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:09:33.716994 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.717036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 01:09:33.717048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:09:33.717058 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.717068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 01:09:33.717090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:09:33.717100 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.717110 | orchestrator | 2025-09-17 01:09:33.717120 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-17 01:09:33.717129 | orchestrator | Wednesday 17 September 2025 01:04:23 +0000 (0:00:00.830) 0:03:23.805 *** 2025-09-17 
01:09:33.717167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 01:09:33.717180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 01:09:33.717196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 01:09:33.717213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.717250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.717263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.717273 | orchestrator | 2025-09-17 01:09:33.717283 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-17 01:09:33.717292 | orchestrator | Wednesday 17 September 2025 01:04:25 +0000 (0:00:02.344) 0:03:26.150 *** 2025-09-17 01:09:33.717302 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 01:09:33.717324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 01:09:33.717363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 01:09:33.717375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.717386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.717396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.717412 | orchestrator | 2025-09-17 01:09:33.717421 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-17 01:09:33.717431 | orchestrator | Wednesday 17 September 2025 01:04:31 +0000 (0:00:05.375) 0:03:31.525 *** 2025-09-17 01:09:33.717446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 01:09:33.717482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:09:33.717494 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.717504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 01:09:33.717515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:09:33.717531 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.717545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 01:09:33.717556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 01:09:33.717566 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.717576 | orchestrator | 2025-09-17 01:09:33.717585 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-17 01:09:33.717595 | orchestrator | Wednesday 17 September 2025 01:04:31 +0000 
(0:00:00.547) 0:03:32.073 *** 2025-09-17 01:09:33.717604 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:09:33.717614 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:09:33.717623 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:09:33.717633 | orchestrator | 2025-09-17 01:09:33.717669 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-17 01:09:33.717681 | orchestrator | Wednesday 17 September 2025 01:04:33 +0000 (0:00:01.494) 0:03:33.567 *** 2025-09-17 01:09:33.717690 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.717700 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.717709 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.717719 | orchestrator | 2025-09-17 01:09:33.717728 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-17 01:09:33.717738 | orchestrator | Wednesday 17 September 2025 01:04:33 +0000 (0:00:00.323) 0:03:33.890 *** 2025-09-17 01:09:33.717748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 01:09:33.717773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 01:09:33.717811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 01:09:33.717824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.717835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.717850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.717861 | orchestrator | 2025-09-17 01:09:33.717870 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-17 01:09:33.717880 | orchestrator | Wednesday 17 September 2025 01:04:35 +0000 (0:00:02.261) 0:03:36.152 *** 2025-09-17 01:09:33.717890 | orchestrator | 2025-09-17 01:09:33.717899 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-17 01:09:33.717909 | orchestrator | Wednesday 17 September 2025 01:04:36 +0000 (0:00:00.136) 0:03:36.289 *** 2025-09-17 01:09:33.717918 | orchestrator | 2025-09-17 01:09:33.717945 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-17 01:09:33.717955 | orchestrator | Wednesday 17 September 2025 01:04:36 +0000 (0:00:00.132) 0:03:36.421 *** 2025-09-17 01:09:33.717965 | orchestrator | 2025-09-17 01:09:33.717974 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-17 01:09:33.717984 | orchestrator | Wednesday 17 September 2025 01:04:36 +0000 (0:00:00.130) 0:03:36.551 *** 2025-09-17 01:09:33.717993 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:09:33.718003 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:09:33.718013 | 
orchestrator | changed: [testbed-node-1] 2025-09-17 01:09:33.718082 | orchestrator | 2025-09-17 01:09:33.718093 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-17 01:09:33.718102 | orchestrator | Wednesday 17 September 2025 01:04:54 +0000 (0:00:18.384) 0:03:54.936 *** 2025-09-17 01:09:33.718112 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:09:33.718127 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:09:33.718136 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:09:33.718146 | orchestrator | 2025-09-17 01:09:33.718156 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-17 01:09:33.718165 | orchestrator | 2025-09-17 01:09:33.718175 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-17 01:09:33.718184 | orchestrator | Wednesday 17 September 2025 01:05:05 +0000 (0:00:10.918) 0:04:05.855 *** 2025-09-17 01:09:33.718194 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 01:09:33.718204 | orchestrator | 2025-09-17 01:09:33.718213 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-17 01:09:33.718223 | orchestrator | Wednesday 17 September 2025 01:05:06 +0000 (0:00:01.265) 0:04:07.121 *** 2025-09-17 01:09:33.718232 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:09:33.718242 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:09:33.718251 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:09:33.718261 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.718270 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.718279 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.718289 | orchestrator | 2025-09-17 01:09:33.718298 | orchestrator | TASK [Load 
and persist br_netfilter module] ************************************ 2025-09-17 01:09:33.718308 | orchestrator | Wednesday 17 September 2025 01:05:07 +0000 (0:00:00.620) 0:04:07.741 *** 2025-09-17 01:09:33.718317 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.718327 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.718343 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.718352 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 01:09:33.718362 | orchestrator | 2025-09-17 01:09:33.718371 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-17 01:09:33.718412 | orchestrator | Wednesday 17 September 2025 01:05:08 +0000 (0:00:01.113) 0:04:08.855 *** 2025-09-17 01:09:33.718424 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-17 01:09:33.718433 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-17 01:09:33.718443 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-17 01:09:33.718452 | orchestrator | 2025-09-17 01:09:33.718461 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-17 01:09:33.718471 | orchestrator | Wednesday 17 September 2025 01:05:09 +0000 (0:00:00.708) 0:04:09.563 *** 2025-09-17 01:09:33.718480 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-17 01:09:33.718490 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-17 01:09:33.718499 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-17 01:09:33.718509 | orchestrator | 2025-09-17 01:09:33.718518 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-17 01:09:33.718528 | orchestrator | Wednesday 17 September 2025 01:05:10 +0000 (0:00:01.162) 0:04:10.726 *** 2025-09-17 01:09:33.718537 | orchestrator | skipping: [testbed-node-3] => 
(item=br_netfilter)  2025-09-17 01:09:33.718547 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:09:33.718556 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-17 01:09:33.718565 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:09:33.718574 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-17 01:09:33.718584 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:09:33.718593 | orchestrator | 2025-09-17 01:09:33.718603 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-17 01:09:33.718612 | orchestrator | Wednesday 17 September 2025 01:05:11 +0000 (0:00:00.822) 0:04:11.548 *** 2025-09-17 01:09:33.718622 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-17 01:09:33.718631 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-17 01:09:33.718640 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.718650 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-17 01:09:33.718659 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-17 01:09:33.718669 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.718678 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-17 01:09:33.718688 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-17 01:09:33.718697 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.718707 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-17 01:09:33.718716 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-17 01:09:33.718726 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-17 
01:09:33.718735 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-17 01:09:33.718744 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-17 01:09:33.718754 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-17 01:09:33.718763 | orchestrator | 2025-09-17 01:09:33.718772 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-17 01:09:33.718782 | orchestrator | Wednesday 17 September 2025 01:05:13 +0000 (0:00:02.098) 0:04:13.647 *** 2025-09-17 01:09:33.718791 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.718801 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.718816 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.718826 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:09:33.718835 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:09:33.718844 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:09:33.718854 | orchestrator | 2025-09-17 01:09:33.718867 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-17 01:09:33.718877 | orchestrator | Wednesday 17 September 2025 01:05:15 +0000 (0:00:01.583) 0:04:15.230 *** 2025-09-17 01:09:33.718887 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.718896 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.718905 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.718915 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:09:33.718976 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:09:33.718987 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:09:33.718997 | orchestrator | 2025-09-17 01:09:33.719006 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-17 01:09:33.719016 | orchestrator | Wednesday 17 September 2025 
01:05:16 +0000 (0:00:01.584) 0:04:16.815 *** 2025-09-17 01:09:33.719027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719072 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719129 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719189 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.719279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
2025-09-17 01:09:33.719291 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719301 | orchestrator |
2025-09-17 01:09:33.719312 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-17 01:09:33.719321 | orchestrator | Wednesday 17 September 2025 01:05:19 +0000 (0:00:02.370) 0:04:19.185 ***
2025-09-17 01:09:33.719331 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 01:09:33.719347 | orchestrator |
2025-09-17 01:09:33.719357 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-17 01:09:33.719367 | orchestrator | Wednesday 17 September 2025 01:05:20 +0000 (0:00:01.340) 0:04:20.526 ***
2025-09-17 01:09:33.719377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.719392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.719428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.719440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.719450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.719467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.719477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.719491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.719500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.719529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719584 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719592 | orchestrator |
2025-09-17 01:09:33.719600 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-09-17 01:09:33.719607 | orchestrator | Wednesday 17 September 2025 01:05:24 +0000 (0:00:03.864) 0:04:24.390 ***
2025-09-17 01:09:33.719636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.719646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.719659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719668 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:09:33.719676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.719688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719696 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.719725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.719735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.719748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719756 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:09:33.719765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.719778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.719787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719795 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:09:33.719826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.719836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719849 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.719857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.719865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719873 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.719881 | orchestrator |
2025-09-17 01:09:33.719889 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-09-17 01:09:33.719897 | orchestrator | Wednesday 17 September 2025 01:05:25 +0000 (0:00:01.787) 0:04:26.178 ***
2025-09-17 01:09:33.719909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.719918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.719965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.719980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.719989 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:09:33.719997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.720005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.720017 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:09:33.720025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.720054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.720070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.720078 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:09:33.720086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.720095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.720103 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.720115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.720123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.720131 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.720139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.720176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.720186 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.720194 | orchestrator |
2025-09-17 01:09:33.720202 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-17 01:09:33.720210 | orchestrator | Wednesday 17 September 2025 01:05:28 +0000 (0:00:02.257) 0:04:28.435 ***
2025-09-17 01:09:33.720218 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.720226 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.720234 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.720242 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 01:09:33.720250 | orchestrator |
2025-09-17 01:09:33.720257 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-09-17 01:09:33.720265 | orchestrator | Wednesday 17 September 2025 01:05:29 +0000 (0:00:01.139) 0:04:29.574 ***
2025-09-17 01:09:33.720273 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-17 01:09:33.720281 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-17 01:09:33.720288 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-17 01:09:33.720296 | orchestrator |
2025-09-17 01:09:33.720304 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-09-17 01:09:33.720312 | orchestrator | Wednesday 17 September 2025 01:05:30 +0000 (0:00:00.918) 0:04:30.492 ***
2025-09-17 01:09:33.720319 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-17 01:09:33.720327 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-17 01:09:33.720335 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-17 01:09:33.720342 | orchestrator |
2025-09-17 01:09:33.720350 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-09-17 01:09:33.720358 | orchestrator | Wednesday 17 September 2025 01:05:31 +0000 (0:00:00.517) 0:04:31.480 ***
2025-09-17 01:09:33.720366 | orchestrator | ok: [testbed-node-3]
2025-09-17 01:09:33.720373 | orchestrator | ok: [testbed-node-4]
2025-09-17 01:09:33.720381 | orchestrator | ok: [testbed-node-5]
2025-09-17 01:09:33.720389 | orchestrator |
2025-09-17 01:09:33.720397 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-09-17 01:09:33.720404 | orchestrator | Wednesday 17 September 2025 01:05:31 +0000 (0:00:00.517) 0:04:31.998 ***
2025-09-17 01:09:33.720412 | orchestrator | ok: [testbed-node-3]
2025-09-17 01:09:33.720420 | orchestrator | ok: [testbed-node-4]
2025-09-17 01:09:33.720427 | orchestrator | ok: [testbed-node-5]
2025-09-17 01:09:33.720435 | orchestrator |
2025-09-17 01:09:33.720443 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-09-17 01:09:33.720451 | orchestrator | Wednesday 17 September 2025 01:05:32 +0000 (0:00:00.713) 0:04:32.711 ***
2025-09-17 01:09:33.720458 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-17 01:09:33.720466 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-17 01:09:33.720474 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-17 01:09:33.720482 | orchestrator |
2025-09-17 01:09:33.720489 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-09-17 01:09:33.720497 | orchestrator | Wednesday 17 September 2025 01:05:33 +0000 (0:00:01.180) 0:04:33.892 ***
2025-09-17 01:09:33.720511 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-17 01:09:33.720519 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-17 01:09:33.720527 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-17 01:09:33.720534 | orchestrator |
2025-09-17 01:09:33.720546 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-09-17 01:09:33.720554 | orchestrator | Wednesday 17 September 2025 01:05:34 +0000 (0:00:01.225) 0:04:35.118 ***
2025-09-17 01:09:33.720562 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-17 01:09:33.720570 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-17 01:09:33.720577 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-17 01:09:33.720585 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-09-17 01:09:33.720593 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-09-17 01:09:33.720601 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-09-17 01:09:33.720609 | orchestrator |
2025-09-17 01:09:33.720617 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-09-17 01:09:33.720624 | orchestrator | Wednesday 17 September 2025 01:05:38 +0000 (0:00:03.847) 0:04:38.965 ***
2025-09-17 01:09:33.720632 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:09:33.720640 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:09:33.720648 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:09:33.720655 | orchestrator |
2025-09-17 01:09:33.720663 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-09-17 01:09:33.720671 | orchestrator | Wednesday 17 September 2025 01:05:39 +0000 (0:00:00.486) 0:04:39.451 ***
2025-09-17 01:09:33.720679 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:09:33.720686 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:09:33.720694 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:09:33.720702 | orchestrator |
2025-09-17 01:09:33.720709 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-09-17 01:09:33.720717 | orchestrator | Wednesday 17 September 2025 01:05:39 +0000 (0:00:00.347) 0:04:39.798 ***
2025-09-17 01:09:33.720725 | orchestrator | changed: [testbed-node-3]
2025-09-17 01:09:33.720733 | orchestrator | changed: [testbed-node-4]
2025-09-17 01:09:33.720740 | orchestrator | changed: [testbed-node-5]
2025-09-17 01:09:33.720748 | orchestrator |
2025-09-17 01:09:33.720777 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-09-17 01:09:33.720786 | orchestrator | Wednesday 17 September 2025 01:05:40 +0000
(0:00:01.274) 0:04:41.073 *** 2025-09-17 01:09:33.720794 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-17 01:09:33.720803 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-17 01:09:33.720811 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-17 01:09:33.720819 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-17 01:09:33.720827 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-17 01:09:33.720835 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-17 01:09:33.720843 | orchestrator | 2025-09-17 01:09:33.720850 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-17 01:09:33.720858 | orchestrator | Wednesday 17 September 2025 01:05:44 +0000 (0:00:03.563) 0:04:44.636 *** 2025-09-17 01:09:33.720866 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-17 01:09:33.720883 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-17 01:09:33.720891 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-17 01:09:33.720899 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-17 01:09:33.720907 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:09:33.720915 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-17 01:09:33.720939 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:09:33.720947 | orchestrator | changed: 
[testbed-node-5] => (item=None) 2025-09-17 01:09:33.720955 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:09:33.720963 | orchestrator | 2025-09-17 01:09:33.720970 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-17 01:09:33.720978 | orchestrator | Wednesday 17 September 2025 01:05:48 +0000 (0:00:03.734) 0:04:48.371 *** 2025-09-17 01:09:33.720986 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:09:33.720994 | orchestrator | 2025-09-17 01:09:33.721002 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-17 01:09:33.721009 | orchestrator | Wednesday 17 September 2025 01:05:48 +0000 (0:00:00.134) 0:04:48.506 *** 2025-09-17 01:09:33.721017 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:09:33.721025 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:09:33.721033 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:09:33.721040 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.721048 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.721056 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.721063 | orchestrator | 2025-09-17 01:09:33.721071 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-17 01:09:33.721079 | orchestrator | Wednesday 17 September 2025 01:05:48 +0000 (0:00:00.593) 0:04:49.099 *** 2025-09-17 01:09:33.721087 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-17 01:09:33.721094 | orchestrator | 2025-09-17 01:09:33.721102 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-17 01:09:33.721110 | orchestrator | Wednesday 17 September 2025 01:05:49 +0000 (0:00:00.692) 0:04:49.792 *** 2025-09-17 01:09:33.721118 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:09:33.721126 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:09:33.721133 | 
orchestrator | skipping: [testbed-node-5] 2025-09-17 01:09:33.721141 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.721152 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.721160 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.721168 | orchestrator | 2025-09-17 01:09:33.721176 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-17 01:09:33.721183 | orchestrator | Wednesday 17 September 2025 01:05:50 +0000 (0:00:00.941) 0:04:50.734 *** 2025-09-17 01:09:33.721192 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721225 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721339 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721354 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721362 | orchestrator | 2025-09-17 01:09:33.721370 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-17 01:09:33.721378 | orchestrator | Wednesday 17 September 2025 01:05:54 +0000 (0:00:04.168) 0:04:54.902 *** 2025-09-17 01:09:33.721386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-17 01:09:33.721394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-17 01:09:33.721406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-17 01:09:33.721419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-17 01:09:33.721433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-17 01:09:33.721442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-17 01:09:33.721450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2025-09-17 01:09:33.721509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 01:09:33.721538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.721546 | orchestrator |
2025-09-17 01:09:33.721555 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-09-17 01:09:33.721568 | orchestrator | Wednesday 17 September 2025 01:06:01 +0000 (0:00:06.413) 0:05:01.315 ***
2025-09-17 01:09:33.721576 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:09:33.721584 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:09:33.721591 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:09:33.721599 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.721607 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.721614 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.721622 | orchestrator |
2025-09-17 01:09:33.721630 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-09-17 01:09:33.721638 | orchestrator | Wednesday 17 September 2025 01:06:02 +0000 (0:00:01.331) 0:05:02.647 ***
2025-09-17 01:09:33.721645 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-17 01:09:33.721653 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-17 01:09:33.721661 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-17 01:09:33.721669 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-17 01:09:33.721680 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-17 01:09:33.721689 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.721697 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-17 01:09:33.721705 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-17 01:09:33.721713 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-17 01:09:33.721720 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.721728 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-17 01:09:33.721736 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.721744 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-17 01:09:33.721751 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-17 01:09:33.721759 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-17 01:09:33.721767 | orchestrator |
2025-09-17 01:09:33.721774 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-09-17 01:09:33.721782 | orchestrator | Wednesday 17 September 2025 01:06:06 +0000 (0:00:03.660) 0:05:06.307 ***
2025-09-17 01:09:33.721790 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:09:33.721797 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:09:33.721805 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:09:33.721813 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.721821 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.721828 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.721836 | orchestrator |
2025-09-17 01:09:33.721844 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-09-17 01:09:33.721852 | orchestrator | Wednesday 17 September 2025 01:06:06 +0000 (0:00:00.614) 0:05:06.922 ***
2025-09-17 01:09:33.721860 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-17 01:09:33.721867 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-17 01:09:33.721875 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-17 01:09:33.721883 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-17 01:09:33.721891 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-17 01:09:33.721904 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-17 01:09:33.721912 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.721955 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.721965 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.721973 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.721980 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.721992 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.722000 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.722008 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.722038 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.722048 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.722056 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.722063 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.722071 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.722079 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.722086 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-17 01:09:33.722094 | orchestrator |
2025-09-17 01:09:33.722102 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-09-17 01:09:33.722110 | orchestrator | Wednesday 17 September 2025 01:06:12 +0000 (0:00:05.355) 0:05:12.278 ***
2025-09-17 01:09:33.722117 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-17 01:09:33.722125 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-17 01:09:33.722137 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-17 01:09:33.722146 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-17 01:09:33.722154 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-17 01:09:33.722162 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-17 01:09:33.722169 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-17 01:09:33.722177 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-17 01:09:33.722185 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-17 01:09:33.722193 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-17 01:09:33.722201 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-17 01:09:33.722208 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-17 01:09:33.722216 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.722224 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-17 01:09:33.722232 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-17 01:09:33.722248 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.722256 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-17 01:09:33.722264 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.722272 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-17 01:09:33.722279 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-17 01:09:33.722287 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-17 01:09:33.722295 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-17 01:09:33.722303 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-17 01:09:33.722310 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-17 01:09:33.722318 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-17 01:09:33.722326 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-17 01:09:33.722334 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-17 01:09:33.722341 | orchestrator |
2025-09-17 01:09:33.722349 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-09-17 01:09:33.722357 | orchestrator | Wednesday 17 September 2025 01:06:19 +0000 (0:00:07.205) 0:05:19.484 ***
2025-09-17 01:09:33.722365 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:09:33.722373 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:09:33.722380 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:09:33.722388 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.722396 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.722404 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.722411 | orchestrator |
2025-09-17 01:09:33.722417 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-09-17 01:09:33.722424 | orchestrator | Wednesday 17 September 2025 01:06:20 +0000 (0:00:00.858) 0:05:20.343 ***
2025-09-17 01:09:33.722430 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:09:33.722441 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:09:33.722447 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:09:33.722454 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.722461 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.722467 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.722474 | orchestrator |
2025-09-17 01:09:33.722481 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-09-17 01:09:33.722487 | orchestrator | Wednesday 17 September 2025 01:06:20 +0000 (0:00:00.651) 0:05:20.994 ***
2025-09-17 01:09:33.722494 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.722500 | orchestrator | changed: [testbed-node-3]
2025-09-17 01:09:33.722507 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.722514 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.722520 | orchestrator | changed: [testbed-node-4]
2025-09-17 01:09:33.722527 | orchestrator | changed: [testbed-node-5]
2025-09-17 01:09:33.722533 | orchestrator |
2025-09-17 01:09:33.722540 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-09-17 01:09:33.722547 | orchestrator | Wednesday 17 September 2025 01:06:23 +0000 (0:00:02.511) 0:05:23.505 ***
2025-09-17 01:09:33.722557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.722569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.722576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.722583 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:09:33.722590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.722601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.722608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.722619 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:09:33.722630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.722637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.722644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.722651 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:09:33.722658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.722668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.722675 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.722682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.722696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.722704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.722711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.722718 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.722725 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.722731 | orchestrator |
2025-09-17 01:09:33.722738 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-09-17 01:09:33.722745 | orchestrator | Wednesday 17 September 2025 01:06:24 +0000 (0:00:01.376) 0:05:24.882 ***
2025-09-17 01:09:33.722751 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-17 01:09:33.722758 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-17 01:09:33.722765 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:09:33.722771 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-17 01:09:33.722778 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-17 01:09:33.722785 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:09:33.722791 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-17 01:09:33.722798 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-17 01:09:33.722804 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:09:33.722811 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-17 01:09:33.722817 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-17 01:09:33.722824 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.722831 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-17 01:09:33.722837 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-17 01:09:33.722844 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.722854 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-17 01:09:33.722860 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-17 01:09:33.722871 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.722877 | orchestrator |
2025-09-17 01:09:33.722884 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-09-17 01:09:33.722891 | orchestrator | Wednesday 17 September 2025 01:06:25 +0000 (0:00:01.066) 0:05:25.948 ***
2025-09-17 01:09:33.722898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.722908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.722916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 01:09:33.722935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.722945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.722957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.722964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.722975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 01:09:33.722983 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 01:09:33.722990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.722997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.723011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.723018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.723029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.723036 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 01:09:33.723043 | orchestrator |
2025-09-17 01:09:33.723050 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-17 01:09:33.723057 | orchestrator | Wednesday 17 September 2025 01:06:28 +0000 (0:00:02.644) 0:05:28.593 ***
2025-09-17 01:09:33.723063 | orchestrator | skipping: [testbed-node-3]
2025-09-17 01:09:33.723070 | orchestrator | skipping: [testbed-node-4]
2025-09-17 01:09:33.723077 | orchestrator | skipping: [testbed-node-5]
2025-09-17 01:09:33.723083 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.723090 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.723097 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.723103 | orchestrator |
2025-09-17 01:09:33.723110 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-17 01:09:33.723117 | orchestrator | Wednesday 17 September 2025 01:06:29 +0000 (0:00:00.150) 0:05:29.402 ***
2025-09-17 01:09:33.723123 | orchestrator |
2025-09-17 01:09:33.723130 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-17 01:09:33.723136 | orchestrator | Wednesday 17 September 2025 01:06:29 +0000 (0:00:00.135) 0:05:29.553 ***
2025-09-17 01:09:33.723143 | orchestrator |
2025-09-17 01:09:33.723154 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-17 01:09:33.723161 | orchestrator | Wednesday 17 September 2025 01:06:29 +0000 (0:00:00.129) 0:05:29.689 ***
2025-09-17 01:09:33.723167 | orchestrator |
2025-09-17 01:09:33.723174 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-17 01:09:33.723180 | orchestrator | Wednesday 17 September 2025 01:06:29 +0000 (0:00:00.137) 0:05:29.818 ***
2025-09-17 01:09:33.723187 | orchestrator |
2025-09-17 01:09:33.723193 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-17 01:09:33.723200 | orchestrator | Wednesday 17 September 2025 01:06:29 +0000 (0:00:00.128) 0:05:30.085 ***
2025-09-17 01:09:33.723226 | orchestrator |
2025-09-17 01:09:33.723233 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-09-17 01:09:33.723239 | orchestrator | Wednesday 17 September 2025 01:06:30 +0000 (0:00:00.297) 0:05:30.383 ***
2025-09-17 01:09:33.723246 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.723253 | orchestrator | changed: [testbed-node-2]
2025-09-17 01:09:33.723259 | orchestrator | changed: [testbed-node-1]
2025-09-17 01:09:33.723266 | orchestrator |
2025-09-17 01:09:33.723272 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-09-17 01:09:33.723282 | orchestrator | Wednesday 17 September 2025 01:06:42 +0000 (0:00:12.047) 0:05:42.430 ***
2025-09-17 01:09:33.723289 | orchestrator | changed: [testbed-node-0]
2025-09-17 01:09:33.723295 | orchestrator | changed: [testbed-node-1]
2025-09-17 01:09:33.723302 | orchestrator | changed: [testbed-node-2]
2025-09-17 01:09:33.723308 | orchestrator |
2025-09-17 01:09:33.723315 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-09-17 01:09:33.723322 | orchestrator | Wednesday 17 September 2025 01:06:58 +0000 (0:00:16.742) 0:05:59.172 ***
2025-09-17 01:09:33.723328 | orchestrator | changed: [testbed-node-5]
2025-09-17 01:09:33.723335 | orchestrator | changed: [testbed-node-4]
2025-09-17 01:09:33.723342 | orchestrator | changed: [testbed-node-3]
2025-09-17 01:09:33.723348 | orchestrator |
2025-09-17 01:09:33.723355 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-09-17 01:09:33.723361 | orchestrator | Wednesday 17 September 2025 01:07:23 +0000 (0:00:24.785) 0:06:23.958 ***
2025-09-17 01:09:33.723368 | orchestrator | changed: [testbed-node-3]
2025-09-17 01:09:33.723375 | orchestrator | changed: [testbed-node-4]
2025-09-17 01:09:33.723381 | orchestrator | changed: [testbed-node-5]
2025-09-17 01:09:33.723388 | orchestrator |
2025-09-17 01:09:33.723394 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready]
************** 2025-09-17 01:09:33.723401 | orchestrator | Wednesday 17 September 2025 01:08:02 +0000 (0:00:38.266) 0:07:02.224 *** 2025-09-17 01:09:33.723408 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:09:33.723414 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:09:33.723421 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:09:33.723427 | orchestrator | 2025-09-17 01:09:33.723434 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-17 01:09:33.723441 | orchestrator | Wednesday 17 September 2025 01:08:02 +0000 (0:00:00.850) 0:07:03.075 *** 2025-09-17 01:09:33.723447 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:09:33.723454 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:09:33.723460 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:09:33.723467 | orchestrator | 2025-09-17 01:09:33.723474 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-17 01:09:33.723483 | orchestrator | Wednesday 17 September 2025 01:08:03 +0000 (0:00:00.781) 0:07:03.857 *** 2025-09-17 01:09:33.723490 | orchestrator | changed: [testbed-node-3] 2025-09-17 01:09:33.723497 | orchestrator | changed: [testbed-node-5] 2025-09-17 01:09:33.723503 | orchestrator | changed: [testbed-node-4] 2025-09-17 01:09:33.723552 | orchestrator | 2025-09-17 01:09:33.723559 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-17 01:09:33.723566 | orchestrator | Wednesday 17 September 2025 01:08:23 +0000 (0:00:19.632) 0:07:23.489 *** 2025-09-17 01:09:33.723573 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:09:33.723579 | orchestrator | 2025-09-17 01:09:33.723586 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-17 01:09:33.723592 | orchestrator | Wednesday 17 September 2025 01:08:23 +0000 (0:00:00.103) 0:07:23.593 *** 2025-09-17 
01:09:33.723599 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:09:33.723607 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:09:33.723619 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.723629 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.723641 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.723658 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-09-17 01:09:33.723669 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-17 01:09:33.723680 | orchestrator | 2025-09-17 01:09:33.723690 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-17 01:09:33.723699 | orchestrator | Wednesday 17 September 2025 01:08:45 +0000 (0:00:22.333) 0:07:45.926 *** 2025-09-17 01:09:33.723710 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.723720 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.723729 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:09:33.723739 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:09:33.723749 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:09:33.723759 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.723770 | orchestrator | 2025-09-17 01:09:33.723780 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-17 01:09:33.723791 | orchestrator | Wednesday 17 September 2025 01:08:54 +0000 (0:00:08.799) 0:07:54.726 *** 2025-09-17 01:09:33.723801 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:09:33.723812 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.723824 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:09:33.723832 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.723838 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.723845 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-09-17 01:09:33.723852 | orchestrator | 2025-09-17 01:09:33.723858 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-17 01:09:33.723865 | orchestrator | Wednesday 17 September 2025 01:08:58 +0000 (0:00:03.894) 0:07:58.620 *** 2025-09-17 01:09:33.723872 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-17 01:09:33.723878 | orchestrator | 2025-09-17 01:09:33.723885 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-17 01:09:33.723891 | orchestrator | Wednesday 17 September 2025 01:09:11 +0000 (0:00:12.601) 0:08:11.222 *** 2025-09-17 01:09:33.723898 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-17 01:09:33.723905 | orchestrator | 2025-09-17 01:09:33.723911 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-17 01:09:33.723918 | orchestrator | Wednesday 17 September 2025 01:09:12 +0000 (0:00:01.294) 0:08:12.516 *** 2025-09-17 01:09:33.723961 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:09:33.723968 | orchestrator | 2025-09-17 01:09:33.723975 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-17 01:09:33.723981 | orchestrator | Wednesday 17 September 2025 01:09:13 +0000 (0:00:01.282) 0:08:13.799 *** 2025-09-17 01:09:33.723988 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-17 01:09:33.723995 | orchestrator | 2025-09-17 01:09:33.724008 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-17 01:09:33.724015 | orchestrator | Wednesday 17 September 2025 01:09:25 +0000 (0:00:11.753) 0:08:25.553 *** 2025-09-17 01:09:33.724028 | orchestrator | ok: [testbed-node-3] 2025-09-17 01:09:33.724035 | orchestrator 
| ok: [testbed-node-5] 2025-09-17 01:09:33.724042 | orchestrator | ok: [testbed-node-4] 2025-09-17 01:09:33.724049 | orchestrator | ok: [testbed-node-0] 2025-09-17 01:09:33.724055 | orchestrator | ok: [testbed-node-1] 2025-09-17 01:09:33.724062 | orchestrator | ok: [testbed-node-2] 2025-09-17 01:09:33.724069 | orchestrator | 2025-09-17 01:09:33.724076 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-17 01:09:33.724082 | orchestrator | 2025-09-17 01:09:33.724089 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-17 01:09:33.724096 | orchestrator | Wednesday 17 September 2025 01:09:27 +0000 (0:00:01.866) 0:08:27.419 *** 2025-09-17 01:09:33.724103 | orchestrator | changed: [testbed-node-0] 2025-09-17 01:09:33.724109 | orchestrator | changed: [testbed-node-1] 2025-09-17 01:09:33.724116 | orchestrator | changed: [testbed-node-2] 2025-09-17 01:09:33.724123 | orchestrator | 2025-09-17 01:09:33.724129 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-17 01:09:33.724135 | orchestrator | 2025-09-17 01:09:33.724142 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-17 01:09:33.724148 | orchestrator | Wednesday 17 September 2025 01:09:28 +0000 (0:00:01.115) 0:08:28.535 *** 2025-09-17 01:09:33.724154 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.724161 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.724167 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.724173 | orchestrator | 2025-09-17 01:09:33.724179 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-17 01:09:33.724185 | orchestrator | 2025-09-17 01:09:33.724192 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-17 01:09:33.724198 | orchestrator | 
Wednesday 17 September 2025 01:09:28 +0000 (0:00:00.508) 0:08:29.043 *** 2025-09-17 01:09:33.724204 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-17 01:09:33.724216 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-17 01:09:33.724223 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-17 01:09:33.724229 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-17 01:09:33.724235 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-17 01:09:33.724241 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-17 01:09:33.724248 | orchestrator | skipping: [testbed-node-3] 2025-09-17 01:09:33.724254 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-17 01:09:33.724260 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-17 01:09:33.724267 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-17 01:09:33.724273 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-17 01:09:33.724279 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-17 01:09:33.724285 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-17 01:09:33.724292 | orchestrator | skipping: [testbed-node-4] 2025-09-17 01:09:33.724298 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-17 01:09:33.724304 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-17 01:09:33.724310 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-17 01:09:33.724317 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-17 01:09:33.724323 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-17 01:09:33.724329 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  
2025-09-17 01:09:33.724335 | orchestrator | skipping: [testbed-node-5] 2025-09-17 01:09:33.724342 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-17 01:09:33.724348 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-17 01:09:33.724354 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-17 01:09:33.724364 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-17 01:09:33.724371 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-17 01:09:33.724377 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-17 01:09:33.724383 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.724390 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-17 01:09:33.724396 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-17 01:09:33.724402 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-17 01:09:33.724408 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-17 01:09:33.724415 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-17 01:09:33.724421 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-17 01:09:33.724427 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.724433 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-17 01:09:33.724439 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-17 01:09:33.724446 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-17 01:09:33.724452 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-17 01:09:33.724458 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-17 01:09:33.724464 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  
2025-09-17 01:09:33.724470 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.724477 | orchestrator | 2025-09-17 01:09:33.724483 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-17 01:09:33.724489 | orchestrator | 2025-09-17 01:09:33.724495 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-17 01:09:33.724501 | orchestrator | Wednesday 17 September 2025 01:09:30 +0000 (0:00:01.370) 0:08:30.414 *** 2025-09-17 01:09:33.724511 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-17 01:09:33.724518 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-17 01:09:33.724524 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.724530 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-17 01:09:33.724536 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-17 01:09:33.724542 | orchestrator | skipping: [testbed-node-1] 2025-09-17 01:09:33.724549 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-17 01:09:33.724555 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-09-17 01:09:33.724561 | orchestrator | skipping: [testbed-node-2] 2025-09-17 01:09:33.724567 | orchestrator | 2025-09-17 01:09:33.724573 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-17 01:09:33.724580 | orchestrator | 2025-09-17 01:09:33.724586 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-09-17 01:09:33.724592 | orchestrator | Wednesday 17 September 2025 01:09:30 +0000 (0:00:00.743) 0:08:31.157 *** 2025-09-17 01:09:33.724598 | orchestrator | skipping: [testbed-node-0] 2025-09-17 01:09:33.724604 | orchestrator | 2025-09-17 01:09:33.724610 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 
2025-09-17 01:09:33.724617 | orchestrator |
2025-09-17 01:09:33.724623 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-17 01:09:33.724629 | orchestrator | Wednesday 17 September 2025 01:09:31 +0000 (0:00:00.702) 0:08:31.859 ***
2025-09-17 01:09:33.724635 | orchestrator | skipping: [testbed-node-0]
2025-09-17 01:09:33.724641 | orchestrator | skipping: [testbed-node-1]
2025-09-17 01:09:33.724648 | orchestrator | skipping: [testbed-node-2]
2025-09-17 01:09:33.724654 | orchestrator |
2025-09-17 01:09:33.724660 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 01:09:33.724667 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 01:09:33.724681 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-09-17 01:09:33.724688 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-17 01:09:33.724694 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-17 01:09:33.724700 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-17 01:09:33.724707 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-17 01:09:33.724713 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-09-17 01:09:33.724719 | orchestrator |
2025-09-17 01:09:33.724725 | orchestrator |
2025-09-17 01:09:33.724732 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 01:09:33.724738 | orchestrator | Wednesday 17 September 2025 01:09:32 +0000 (0:00:00.486) 0:08:32.346 ***
2025-09-17 01:09:33.724744 | orchestrator | ===============================================================================
2025-09-17 01:09:33.724751 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.27s
2025-09-17 01:09:33.724757 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.44s
2025-09-17 01:09:33.724763 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.79s
2025-09-17 01:09:33.724769 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.33s
2025-09-17 01:09:33.724776 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.88s
2025-09-17 01:09:33.724782 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 19.63s
2025-09-17 01:09:33.724788 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.38s
2025-09-17 01:09:33.724794 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.12s
2025-09-17 01:09:33.724801 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.74s
2025-09-17 01:09:33.724807 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.39s
2025-09-17 01:09:33.724813 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.35s
2025-09-17 01:09:33.724819 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.96s
2025-09-17 01:09:33.724825 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.79s
2025-09-17 01:09:33.724832 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.60s
2025-09-17 01:09:33.724838 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.05s
2025-09-17 01:09:33.724844 | orchestrator | nova-cell :
Discover nova hosts ---------------------------------------- 11.75s
2025-09-17 01:09:33.724850 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.92s
2025-09-17 01:09:33.724857 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.98s
2025-09-17 01:09:33.724863 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.80s
2025-09-17 01:09:33.724872 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.88s
2025-09-17 01:09:33.724879 | orchestrator | 2025-09-17 01:09:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-17 01:10:34.543298 | orchestrator |
2025-09-17 01:10:34.858579 | orchestrator |
2025-09-17 01:10:34.863086 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Sep 17 01:10:34 UTC 2025
2025-09-17 01:10:34.863126 | orchestrator |
2025-09-17 01:10:35.138622 | orchestrator | ok: Runtime: 0:34:00.107876
2025-09-17 01:10:35.396435 |
2025-09-17 01:10:35.396637 | TASK [Bootstrap services]
2025-09-17 01:10:36.100233 | orchestrator |
2025-09-17 01:10:36.100423 | orchestrator | # BOOTSTRAP
2025-09-17 01:10:36.100445 | orchestrator |
2025-09-17 01:10:36.100459 | orchestrator | + set -e
2025-09-17 01:10:36.100472 | orchestrator | + echo
2025-09-17 01:10:36.100486 | orchestrator | + echo '# BOOTSTRAP'
2025-09-17 01:10:36.100505 | orchestrator | + echo
2025-09-17 01:10:36.100549 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-09-17 01:10:36.109040 | orchestrator | + set -e
2025-09-17 01:10:36.109072 | orchestrator | + sh -c
/opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-17 01:10:40.143627 | orchestrator | 2025-09-17 01:10:40 | INFO  | It takes a moment until task 15997b0a-a26b-4bca-9172-026d629f575f (flavor-manager) has been started and output is visible here. 2025-09-17 01:10:43.600412 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-09-17 01:10:43.602604 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:194 │ 2025-09-17 01:10:43.602641 | orchestrator | │ in run │ 2025-09-17 01:10:43.602654 | orchestrator | │ │ 2025-09-17 01:10:43.602665 | orchestrator | │ 191 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True) │ 2025-09-17 01:10:43.602690 | orchestrator | │ 192 │ │ 2025-09-17 01:10:43.602701 | orchestrator | │ 193 │ definitions = get_flavor_definitions(name, url) │ 2025-09-17 01:10:43.602716 | orchestrator | │ ❱ 194 │ manager = FlavorManager( │ 2025-09-17 01:10:43.602727 | orchestrator | │ 195 │ │ cloud=Cloud(cloud), │ 2025-09-17 01:10:43.602738 | orchestrator | │ 196 │ │ definitions=definitions, │ 2025-09-17 01:10:43.602749 | orchestrator | │ 197 │ │ recommended=recommended, │ 2025-09-17 01:10:43.602760 | orchestrator | │ │ 2025-09-17 01:10:43.602772 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-17 01:10:43.602816 | orchestrator | │ │ cloud = 'admin' │ │ 2025-09-17 01:10:43.602828 | orchestrator | │ │ debug = False │ │ 2025-09-17 01:10:43.602839 | orchestrator | │ │ definitions = { │ │ 2025-09-17 01:10:43.602850 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-17 01:10:43.602861 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-17 01:10:43.602872 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-17 01:10:43.602883 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-17 01:10:43.602894 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-17 
01:10:43.602905 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-17 01:10:43.602916 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-17 01:10:43.602965 | orchestrator | │ │ │ ], │ │ 2025-09-17 01:10:43.602977 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-17 01:10:43.602987 | orchestrator | │ │ │ │ { │ │ 2025-09-17 01:10:43.602998 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-17 01:10:43.603057 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-17 01:10:43.603070 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-17 01:10:43.603081 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-17 01:10:43.603091 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-17 01:10:43.603102 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-17 01:10:43.603113 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-17 01:10:43.603124 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-17 01:10:43.603135 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-17 01:10:43.603145 | orchestrator | │ │ │ │ }, │ │ 2025-09-17 01:10:43.603156 | orchestrator | │ │ │ │ { │ │ 2025-09-17 01:10:43.603167 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-17 01:10:43.603177 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-17 01:10:43.603188 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-17 01:10:43.603198 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-17 01:10:43.603209 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-17 01:10:43.603252 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-17 01:10:43.603265 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-17 01:10:43.603276 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-09-17 01:10:43.603286 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-17 01:10:43.603297 | orchestrator | │ │ │ │ }, │ │ 2025-09-17 
01:10:43.603307 | orchestrator | │ │ │ │ {'name': 'SCS-1V-2',    'cpus': 1, 'ram': 2048, 'disk': 0,  'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2',    'scs:name-v2': 'SCS-1V-2',    'hw_rng:allowed': 'true'}, │ │
2025-09-17 01:10:43.603445 | orchestrator | │ │ │ │ {'name': 'SCS-1V-2-5',  'cpus': 1, 'ram': 2048, 'disk': 5,  'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2:5',  'scs:name-v2': 'SCS-1V-2-5',  'hw_rng:allowed': 'true'}, │ │
2025-09-17 01:10:43.603570 | orchestrator | │ │ │ │ {'name': 'SCS-1V-4',    'cpus': 1, 'ram': 4096, 'disk': 0,  'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4',    'scs:name-v2': 'SCS-1V-4',    'hw_rng:allowed': 'true'}, │ │
2025-09-17 01:10:43.603687 | orchestrator | │ │ │ │ {'name': 'SCS-1V-4-10', 'cpus': 1, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4:10', 'scs:name-v2': 'SCS-1V-4-10', 'hw_rng:allowed': 'true'}, │ │
2025-09-17 01:10:43.628849 | orchestrator | │ │ │ │ {'name': 'SCS-1V-8',    'cpus': 1, 'ram': 8192, 'disk': 0,  'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8',    'scs:name-v2': 'SCS-1V-8',    'hw_rng:allowed': 'true'}, │ │
2025-09-17 01:10:43.629001 | orchestrator | │ │ │ │ {'name': 'SCS-1V-8-20', 'cpus': 1, 'ram': 8192, 'disk': 20, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8:20', 'scs:name-v2': 'SCS-1V-8-20', 'hw_rng:allowed': 'true'}, │ │
2025-09-17 01:10:43.629106 | orchestrator | │ │ │ │ {'name': 'SCS-2V-4',    'cpus': 2, 'ram': 4096, 'disk': 0,  'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4',    'scs:name-v2': 'SCS-2V-4',    'hw_rng:allowed': 'true'}, │ │
2025-09-17 01:10:43.629219 | orchestrator | │ │ │ │ {'name': 'SCS-2V-4-10', 'cpus': 2, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4:10', 'scs:name-v2': 'SCS-2V-4-10', 'hw_rng:allowed': 'true'}, │ │
2025-09-17 01:10:43.629350 | orchestrator | │ │ │ │ ... +19 │ │
2025-09-17 01:10:43.629359 | orchestrator | │ │ │ ] │ │
2025-09-17 01:10:43.629369 | orchestrator | │ │ } │ │
2025-09-17 01:10:43.629378 | orchestrator | │ │ level = 'INFO' │ │
2025-09-17 01:10:43.629388 | orchestrator | │ │ limit_memory = 32 │ │
2025-09-17 01:10:43.629398 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | '+17 │ │
2025-09-17 01:10:43.629417 | orchestrator | │ │ name = 'local' │ │
2025-09-17 01:10:43.629427 | orchestrator | │ │ recommended = True │ │
2025-09-17 01:10:43.629436 | orchestrator | │ │ url = None │ │
2025-09-17 01:10:43.629447 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │
2025-09-17 01:10:43.629458 | orchestrator | │ │
2025-09-17 01:10:43.629468 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:101 │
2025-09-17 01:10:43.629478 | orchestrator | │ in __init__ │
2025-09-17 01:10:43.629497 | orchestrator | │    98 │ │ self.required_flavors = definitions["mandatory"] │
2025-09-17 01:10:43.629506 | orchestrator | │    99 │ │ self.cloud = cloud │
2025-09-17 01:10:43.629525 | orchestrator | │   100 │ │ if recommended: │
2025-09-17 01:10:43.629535 | orchestrator | │ ❱ 101 │ │ │ recommended_flavors = definitions["recommended"] │
2025-09-17 01:10:43.629544 | orchestrator | │   102 │ │ │ # Filter recommended flavors based on memory limit │
2025-09-17 01:10:43.629554 | orchestrator | │   103 │ │ │ limit_memory_mb = limit_memory * 1024 │
2025-09-17 01:10:43.629564 | orchestrator | │   104 │ │ │ filtered_recommended = [ │
2025-09-17 01:10:43.629594 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
2025-09-17 01:10:43.629611 | orchestrator | │ │ cloud = │ │
2025-09-17 01:10:43.629641 | orchestrator | │ │ definitions = { │ │
2025-09-17 01:10:43.629650 | orchestrator | │ │ │ 'reference': [{'field': 'name', 'mandatory_prefix': 'SCS-'}, {'field': 'cpus'}, {'field': 'ram'}, {'field': 'disk'}, {'field': 'public', 'default': True}, {'field': 'disabled', 'default': False}], │ │
2025-09-17 01:10:43.629718 | orchestrator | │ │ │ 'mandatory': [ │ │
2025-09-17 01:10:43.654478 | orchestrator | │ │ │ │ {'name': 'SCS-1L-1',   'cpus': 1, 'ram': 1024, 'disk': 0, 'scs:cpu-type': 'crowded-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1L:1', 'scs:name-v2': 'SCS-1L-1', 'hw_rng:allowed': 'true'}, │ │
2025-09-17 01:10:43.654595 | orchestrator | │ │ │ │ {'name': 'SCS-1L-1-5', 'cpus': 1, 'ram': 1024, 'disk': 5, 'scs:cpu-type': 'crowded-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1L:5', 'scs:name-v2': 'SCS-1L-5', 'hw_rng:allowed': 'true'}, │ │
2025-09-17 01:10:43.654725 | orchestrator | │ │ │ │ (SCS-1V-2 … SCS-2V-4-10, identical to the entries shown in the frame above) ... +19 │ │
2025-09-17 01:10:43.707491 | orchestrator | │ │ │ ] │ │
2025-09-17 01:10:43.707500 | orchestrator | │ │ } │ │
2025-09-17 01:10:43.707510 | orchestrator | │ │ limit_memory = 32 │ │
2025-09-17 01:10:43.707520 | orchestrator | │ │ recommended = True │ │
2025-09-17 01:10:43.707529 | orchestrator | │ │ self = │ │
2025-09-17 01:10:43.707549 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │
2025-09-17 01:10:43.707561 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯
2025-09-17 01:10:43.707595 | orchestrator | KeyError: 'recommended'
2025-09-17 01:10:44.443662 | orchestrator | ERROR
2025-09-17 01:10:44.444217 | orchestrator | {
2025-09-17 01:10:44.444367 | orchestrator |   "delta": "0:00:08.313011",
2025-09-17 01:10:44.444439 | orchestrator |   "end": "2025-09-17 01:10:44.003202",
2025-09-17 01:10:44.444501 | orchestrator |   "msg": "non-zero return code",
2025-09-17 01:10:44.444558 | orchestrator |   "rc": 1,
2025-09-17 01:10:44.444610 | orchestrator |   "start": "2025-09-17 01:10:35.690191"
2025-09-17 01:10:44.444661 | orchestrator | } failure
2025-09-17 01:10:44.464087 | PLAY RECAP
2025-09-17 01:10:44.464189 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-17 01:10:44.714720 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-17 01:10:44.715862 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-17 01:10:45.477887 | PLAY [Post output play]
2025-09-17 01:10:45.493604 | LOOP [stage-output : Register sources]
2025-09-17 01:10:45.566078 | TASK [stage-output : Check sudo]
2025-09-17 01:10:46.423294 | orchestrator | sudo: a password is required
2025-09-17 01:10:46.603069 | orchestrator | ok: Runtime: 0:00:00.014157
2025-09-17 01:10:46.619171 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-17 01:10:46.660318 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-17 01:10:46.730092 | orchestrator | ok
2025-09-17 01:10:46.739446 | LOOP [stage-output : Ensure target folders exist]
2025-09-17 01:10:47.171626 | orchestrator | ok: "docs"
2025-09-17 01:10:47.381007 | orchestrator | ok: "artifacts"
2025-09-17 01:10:47.593925 | orchestrator | ok: "logs"
2025-09-17 01:10:47.617663 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-17 01:10:47.654289 | TASK [stage-output : Make all log files readable]
2025-09-17 01:10:47.895390 | orchestrator | ok
2025-09-17 01:10:47.905017 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-17 01:10:47.940458 | orchestrator | skipping: Conditional result was False
2025-09-17 01:10:47.957154 | TASK [stage-output : Discover log files for compression]
2025-09-17 01:10:47.982255 | orchestrator | skipping: Conditional result was False
2025-09-17 01:10:47.996371 | LOOP [stage-output : Archive everything from logs]
2025-09-17 01:10:48.041149 | PLAY [Post cleanup play]
2025-09-17 01:10:48.050333 | TASK [Set cloud fact (Zuul deployment)]
2025-09-17 01:10:48.107413 | orchestrator | ok
2025-09-17 01:10:48.119508 | TASK [Set cloud fact (local deployment)]
2025-09-17 01:10:48.153841 | orchestrator | skipping: Conditional result was False
2025-09-17 01:10:48.166884 | TASK [Clean the cloud environment]
2025-09-17 01:10:48.694118 | orchestrator | 2025-09-17 01:10:48 - clean up servers
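The failure above is a plain KeyError: the parsed flavor-definition file only contains 'reference' and 'mandatory' keys, while main.py:101 indexes definitions["recommended"] unconditionally whenever recommended=True. A minimal reproduction, plus a defensive variant using dict.get (the helper name pick_recommended and the fallback behavior are assumptions for illustration, not openstack_flavor_manager's actual code):

```python
# Reconstructed from the traceback's locals: only 'reference' and
# 'mandatory' are present, so definitions["recommended"] raises KeyError.
definitions = {
    "reference": [{"field": "name", "mandatory_prefix": "SCS-"}],
    "mandatory": [{"name": "SCS-1L-1", "cpus": 1, "ram": 1024, "disk": 0}],
}

try:
    definitions["recommended"]          # mirrors main.py:101
except KeyError as exc:
    print(f"KeyError: {exc}")           # matches the log output above

def pick_recommended(definitions, recommended=True, limit_memory=32):
    """Hypothetical defensive variant: treat a missing 'recommended'
    section as an empty list instead of raising."""
    if not recommended:
        return []
    flavors = definitions.get("recommended", [])   # .get instead of [...]
    limit_memory_mb = limit_memory * 1024          # mirrors main.py:103
    return [f for f in flavors if f.get("ram", 0) <= limit_memory_mb]

print(pick_recommended(definitions))    # → []
```

With this change the memory-limit filter at main.py:103-104 would simply operate on an empty list when no recommended flavors are defined, and the deploy play would not abort with rc=1.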
2025-09-17 01:10:49.434252 | orchestrator | 2025-09-17 01:10:49 - testbed-manager
2025-09-17 01:10:49.515355 | orchestrator | 2025-09-17 01:10:49 - testbed-node-0
2025-09-17 01:10:49.601149 | orchestrator | 2025-09-17 01:10:49 - testbed-node-4
2025-09-17 01:10:49.688184 | orchestrator | 2025-09-17 01:10:49 - testbed-node-3
2025-09-17 01:10:49.781694 | orchestrator | 2025-09-17 01:10:49 - testbed-node-1
2025-09-17 01:10:49.867679 | orchestrator | 2025-09-17 01:10:49 - testbed-node-5
2025-09-17 01:10:49.955918 | orchestrator | 2025-09-17 01:10:49 - testbed-node-2
2025-09-17 01:10:50.038801 | orchestrator | 2025-09-17 01:10:50 - clean up keypairs
2025-09-17 01:10:50.058488 | orchestrator | 2025-09-17 01:10:50 - testbed
2025-09-17 01:10:50.082714 | orchestrator | 2025-09-17 01:10:50 - wait for servers to be gone
2025-09-17 01:11:00.947676 | orchestrator | 2025-09-17 01:11:00 - clean up ports
2025-09-17 01:11:01.136849 | orchestrator | 2025-09-17 01:11:01 - 14638f58-0b3c-417f-a69d-555e47566135
2025-09-17 01:11:01.436851 | orchestrator | 2025-09-17 01:11:01 - 3255d1e2-e4e5-44e0-9651-cab2c2d8f3a2
2025-09-17 01:11:01.728195 | orchestrator | 2025-09-17 01:11:01 - 443717a1-0c21-43f4-8cc6-ec3566446c08
2025-09-17 01:11:02.154598 | orchestrator | 2025-09-17 01:11:02 - 53b9d046-f9f0-4c5a-b0c7-e4c57a890f40
2025-09-17 01:11:02.360983 | orchestrator | 2025-09-17 01:11:02 - 619473f7-b947-4491-b724-181fb0445290
2025-09-17 01:11:02.640493 | orchestrator | 2025-09-17 01:11:02 - 82437d7b-7008-479f-8c2b-04c2bc52d2ec
2025-09-17 01:11:02.900573 | orchestrator | 2025-09-17 01:11:02 - e7e7fa80-f40b-41ce-9532-4aaf3685ec7a
2025-09-17 01:11:03.142985 | orchestrator | 2025-09-17 01:11:03 - clean up volumes
2025-09-17 01:11:03.299531 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-manager-base
2025-09-17 01:11:03.344023 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-2-node-base
2025-09-17 01:11:03.382554 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-4-node-base
2025-09-17 01:11:03.426604 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-0-node-base
2025-09-17 01:11:03.476897 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-1-node-base
2025-09-17 01:11:03.523048 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-5-node-base
2025-09-17 01:11:03.563965 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-3-node-base
2025-09-17 01:11:03.607093 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-5-node-5
2025-09-17 01:11:03.649459 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-8-node-5
2025-09-17 01:11:03.692947 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-6-node-3
2025-09-17 01:11:03.737386 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-2-node-5
2025-09-17 01:11:03.775853 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-4-node-4
2025-09-17 01:11:03.820171 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-1-node-4
2025-09-17 01:11:03.863344 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-7-node-4
2025-09-17 01:11:03.908175 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-3-node-3
2025-09-17 01:11:03.952024 | orchestrator | 2025-09-17 01:11:03 - testbed-volume-0-node-3
2025-09-17 01:11:04.000673 | orchestrator | 2025-09-17 01:11:04 - disconnect routers
2025-09-17 01:11:04.115885 | orchestrator | 2025-09-17 01:11:04 - testbed
2025-09-17 01:11:05.278442 | orchestrator | 2025-09-17 01:11:05 - clean up subnets
2025-09-17 01:11:05.318005 | orchestrator | 2025-09-17 01:11:05 - subnet-testbed-management
2025-09-17 01:11:05.480630 | orchestrator | 2025-09-17 01:11:05 - clean up networks
2025-09-17 01:11:05.672026 | orchestrator | 2025-09-17 01:11:05 - net-testbed-management
2025-09-17 01:11:05.965833 | orchestrator | 2025-09-17 01:11:05 - clean up security groups
2025-09-17 01:11:06.009609 | orchestrator | 2025-09-17 01:11:06 - testbed-management
2025-09-17 01:11:06.120059 | orchestrator | 2025-09-17 01:11:06 - testbed-node
2025-09-17 01:11:06.219699 | orchestrator | 2025-09-17 01:11:06 - clean up floating ips
2025-09-17 01:11:06.251443 | orchestrator | 2025-09-17 01:11:06 - 81.163.193.183
2025-09-17 01:11:06.589851 | orchestrator | 2025-09-17 01:11:06 - clean up routers
2025-09-17 01:11:06.689265 | orchestrator | 2025-09-17 01:11:06 - testbed
2025-09-17 01:11:07.719453 | orchestrator | ok: Runtime: 0:00:19.097917
2025-09-17 01:11:07.723949 | PLAY RECAP
2025-09-17 01:11:07.724101 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-17 01:11:07.867947 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-17 01:11:07.870430 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-17 01:11:08.576107 | PLAY [Cleanup play]
2025-09-17 01:11:08.591735 | TASK [Set cloud fact (Zuul deployment)]
2025-09-17 01:11:08.646765 | orchestrator | ok
2025-09-17 01:11:08.656076 | TASK [Set cloud fact (local deployment)]
2025-09-17 01:11:08.690376 | orchestrator | skipping: Conditional result was False
2025-09-17 01:11:08.700796 | TASK [Clean the cloud environment]
2025-09-17 01:11:09.810439 | orchestrator | 2025-09-17 01:11:09 - clean up servers
2025-09-17 01:11:10.289848 | orchestrator | 2025-09-17 01:11:10 - clean up keypairs
2025-09-17 01:11:10.307557 | orchestrator | 2025-09-17 01:11:10 - wait for servers to be gone
2025-09-17 01:11:10.356652 | orchestrator | 2025-09-17 01:11:10 - clean up ports
2025-09-17 01:11:10.431626 | orchestrator | 2025-09-17 01:11:10 - clean up volumes
2025-09-17 01:11:10.490296 | orchestrator | 2025-09-17 01:11:10 - disconnect routers
2025-09-17 01:11:10.520499 | orchestrator | 2025-09-17 01:11:10 - clean up subnets
2025-09-17 01:11:10.540563 | orchestrator | 2025-09-17 01:11:10 - clean up networks
2025-09-17 01:11:10.659519 | orchestrator | 2025-09-17 01:11:10 - clean up security groups
2025-09-17 01:11:10.694347 | orchestrator | 2025-09-17 01:11:10 - clean up floating ips
2025-09-17 01:11:10.718710 | orchestrator | 2025-09-17 01:11:10 - clean up routers
2025-09-17 01:11:11.235592 | orchestrator | ok: Runtime: 0:00:01.276312
2025-09-17 01:11:11.239683 | PLAY RECAP
2025-09-17 01:11:11.239806 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-17 01:11:11.364132 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-17 01:11:11.365205 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-17 01:11:12.090378 | PLAY [Base post-fetch]
2025-09-17 01:11:12.105991 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-17 01:11:12.161314 | orchestrator | skipping: Conditional result was False
2025-09-17 01:11:12.170821 | TASK [fetch-output : Set log path for single node]
2025-09-17 01:11:12.206306 | orchestrator | ok
2025-09-17 01:11:12.214334 | LOOP [fetch-output : Ensure local output dirs]
2025-09-17 01:11:12.686123 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/dedceda2b2d442e38d351229c1f15473/work/logs"
2025-09-17 01:11:12.950183 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dedceda2b2d442e38d351229c1f15473/work/artifacts"
2025-09-17 01:11:13.220556 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dedceda2b2d442e38d351229c1f15473/work/docs"
2025-09-17 01:11:13.235939 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-17 01:11:14.145936 | orchestrator | changed: .d..t...... ./
2025-09-17 01:11:14.146365 | orchestrator | changed: All items complete
2025-09-17 01:11:14.858440 | orchestrator | changed: .d..t...... ./
2025-09-17 01:11:15.576893 | orchestrator | changed: .d..t...... ./
2025-09-17 01:11:15.593093 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-17 01:11:15.631596 | orchestrator | skipping: Conditional result was False
2025-09-17 01:11:15.636313 | orchestrator | skipping: Conditional result was False
2025-09-17 01:11:15.655568 | PLAY RECAP
2025-09-17 01:11:15.655653 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-17 01:11:15.792332 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-17 01:11:15.794668 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-17 01:11:16.505819 | PLAY [Base post]
2025-09-17 01:11:16.520179 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-17 01:11:17.429622 | orchestrator | changed
2025-09-17 01:11:17.440222 | PLAY RECAP
2025-09-17 01:11:17.440330 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-17 01:11:17.563582 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-17 01:11:17.564610 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-17 01:11:18.333159 | PLAY [Base post-logs]
2025-09-17 01:11:18.343812 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-17 01:11:18.797723 | localhost | changed
2025-09-17 01:11:18.815013 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-17 01:11:18.853615 | localhost | ok
2025-09-17 01:11:18.860937 | TASK [Set zuul-log-path fact]
2025-09-17 01:11:18.879013 | localhost | ok
2025-09-17 01:11:18.892169 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-17 01:11:18.929567 | localhost | ok
2025-09-17 01:11:18.935864 | TASK [upload-logs : Create log directories]
2025-09-17 01:11:19.442543 | localhost | changed
2025-09-17 01:11:19.447293 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-17 01:11:19.937791 | localhost -> localhost | ok: Runtime: 0:00:00.005467
2025-09-17 01:11:19.942108 | TASK [upload-logs : Upload logs to log server]
2025-09-17 01:11:20.507076 | localhost | Output suppressed because no_log was given
2025-09-17 01:11:20.511357 | LOOP [upload-logs : Compress console log and json output]
2025-09-17 01:11:20.568120 | localhost | skipping: Conditional result was False
2025-09-17 01:11:20.572820 | localhost | skipping: Conditional result was False
2025-09-17 01:11:20.585826 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-17 01:11:20.632299 | localhost | skipping: Conditional result was False
2025-09-17 01:11:20.636665 | localhost | skipping: Conditional result was False
2025-09-17 01:11:20.649964 | LOOP [upload-logs : Upload console log and json output]
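The two "Clean the cloud environment" tasks in this log walk the testbed's OpenStack resources in dependency order: servers (and their keypairs, ports, and volumes) go first, routers go last, since a router cannot be deleted while subnet interfaces or floating IPs still reference it. A dry-run sketch of that ordering, with phase names taken verbatim from the log (the helper function is illustrative only, not the testbed's actual cleanup script):

```python
# Teardown phases in the order the "Clean the cloud environment" task
# logs them; dependents are removed before the resources they depend on.
CLEANUP_PHASES = [
    "clean up servers",
    "clean up keypairs",
    "wait for servers to be gone",   # ports/volumes detach only after this
    "clean up ports",
    "clean up volumes",
    "disconnect routers",            # remove subnet interfaces first
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",              # safe to delete once nothing references them
]

def cleanup_plan():
    """Return the teardown phases in dependency order (dry run)."""
    return list(CLEANUP_PHASES)

for phase in cleanup_plan():
    print(phase)
```

The second cleanup run above finds nothing left to delete and finishes in about a second, which is why the same phase list appears twice: the post play does the real teardown and the cleanup play is an idempotent safety net.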