2025-06-03 14:48:35.703175 | Job console starting
2025-06-03 14:48:35.718656 | Updating git repos
2025-06-03 14:48:35.800576 | Cloning repos into workspace
2025-06-03 14:48:35.980516 | Restoring repo states
2025-06-03 14:48:36.006565 | Merging changes
2025-06-03 14:48:36.006603 | Checking out repos
2025-06-03 14:48:36.279172 | Preparing playbooks
2025-06-03 14:48:37.072059 | Running Ansible setup
2025-06-03 14:48:41.769847 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-03 14:48:42.872574 |
2025-06-03 14:48:42.872789 | PLAY [Base pre]
2025-06-03 14:48:42.891240 |
2025-06-03 14:48:42.891405 | TASK [Setup log path fact]
2025-06-03 14:48:42.922524 | orchestrator | ok
2025-06-03 14:48:42.941857 |
2025-06-03 14:48:42.942064 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-03 14:48:42.983256 | orchestrator | ok
2025-06-03 14:48:42.999019 |
2025-06-03 14:48:42.999163 | TASK [emit-job-header : Print job information]
2025-06-03 14:48:43.071672 | # Job Information
2025-06-03 14:48:43.071940 | Ansible Version: 2.16.14
2025-06-03 14:48:43.072003 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-06-03 14:48:43.072046 | Pipeline: post
2025-06-03 14:48:43.072075 | Executor: 521e9411259a
2025-06-03 14:48:43.072097 | Triggered by: https://github.com/osism/testbed/commit/2740c665ca20c3108db7cd16a109674d122adad4
2025-06-03 14:48:43.072119 | Event ID: d01af34c-4089-11f0-9a3a-077cdaad19d4
2025-06-03 14:48:43.094089 |
2025-06-03 14:48:43.094244 | LOOP [emit-job-header : Print node information]
2025-06-03 14:48:43.227041 | orchestrator | ok:
2025-06-03 14:48:43.227330 | orchestrator | # Node Information
2025-06-03 14:48:43.227370 | orchestrator | Inventory Hostname: orchestrator
2025-06-03 14:48:43.227396 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-03 14:48:43.227418 | orchestrator | Username: zuul-testbed03
2025-06-03 14:48:43.227438 | orchestrator | Distro: Debian 12.11
2025-06-03 14:48:43.227467 | orchestrator | Provider: static-testbed
2025-06-03 14:48:43.227492 | orchestrator | Region:
2025-06-03 14:48:43.227513 | orchestrator | Label: testbed-orchestrator
2025-06-03 14:48:43.227532 | orchestrator | Product Name: OpenStack Nova
2025-06-03 14:48:43.227552 | orchestrator | Interface IP: 81.163.193.140
2025-06-03 14:48:43.246240 |
2025-06-03 14:48:43.246386 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-03 14:48:44.013662 | orchestrator -> localhost | changed
2025-06-03 14:48:44.023352 |
2025-06-03 14:48:44.023534 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-03 14:48:45.330792 | orchestrator -> localhost | changed
2025-06-03 14:48:45.344908 |
2025-06-03 14:48:45.345029 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-03 14:48:45.905916 | orchestrator -> localhost | ok
2025-06-03 14:48:45.914823 |
2025-06-03 14:48:45.914993 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-03 14:48:45.946344 | orchestrator | ok
2025-06-03 14:48:45.969739 | orchestrator | included: /var/lib/zuul/builds/a7d7e7a961564eaa8d9118892ef2c194/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-03 14:48:45.979906 |
2025-06-03 14:48:45.980045 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-03 14:48:47.584352 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-03 14:48:47.584706 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a7d7e7a961564eaa8d9118892ef2c194/work/a7d7e7a961564eaa8d9118892ef2c194_id_rsa
2025-06-03 14:48:47.584795 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a7d7e7a961564eaa8d9118892ef2c194/work/a7d7e7a961564eaa8d9118892ef2c194_id_rsa.pub
2025-06-03 14:48:47.584846 | orchestrator -> localhost | The key fingerprint is:
2025-06-03 14:48:47.584896 | orchestrator -> localhost | SHA256:6iKHKIau0BLWuN8HthZk0UkHbPN0C/miPSIJWkZWswY zuul-build-sshkey
2025-06-03 14:48:47.584942 | orchestrator -> localhost | The key's randomart image is:
2025-06-03 14:48:47.585191 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-03 14:48:47.585290 | orchestrator -> localhost | | E.o+oo.. |
2025-06-03 14:48:47.585339 | orchestrator -> localhost | | o..o*.+ . |
2025-06-03 14:48:47.585383 | orchestrator -> localhost | | o oo + + . |
2025-06-03 14:48:47.585527 | orchestrator -> localhost | | o+.o o o |
2025-06-03 14:48:47.585571 | orchestrator -> localhost | |.o+.+ . S . |
2025-06-03 14:48:47.586376 | orchestrator -> localhost | |.+. * + o |
2025-06-03 14:48:47.586470 | orchestrator -> localhost | |+o... * . . |
2025-06-03 14:48:47.586522 | orchestrator -> localhost | |=o+ ++ . |
2025-06-03 14:48:47.586565 | orchestrator -> localhost | |*. +.oo |
2025-06-03 14:48:47.586605 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-03 14:48:47.586693 | orchestrator -> localhost | ok: Runtime: 0:00:00.707329
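The "Create Temp SSH key" task above generates a build-scoped RSA 3072 key pair named after the build UUID, with the comment zuul-build-sshkey; the "Add back temp key" task further down loads it into the executor's SSH agent. A minimal shell sketch of the equivalent commands (an assumption about the role's internals, which the log does not show; the empty passphrase is likewise assumed):

    WORK=/var/lib/zuul/builds/a7d7e7a961564eaa8d9118892ef2c194/work
    # Generate the per-build key pair (RSA 3072, comment "zuul-build-sshkey", assumed empty passphrase)
    ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey -f "$WORK/a7d7e7a961564eaa8d9118892ef2c194_id_rsa"
    # Load it into the local agent, matching the "Identity added: ..." output below
    ssh-add "$WORK/a7d7e7a961564eaa8d9118892ef2c194_id_rsa"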
2025-06-03 14:48:47.605086 |
2025-06-03 14:48:47.605188 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-03 14:48:47.637885 | orchestrator | ok
2025-06-03 14:48:47.652654 | orchestrator | included: /var/lib/zuul/builds/a7d7e7a961564eaa8d9118892ef2c194/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-03 14:48:47.669541 |
2025-06-03 14:48:47.669623 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-03 14:48:47.692884 | orchestrator | skipping: Conditional result was False
2025-06-03 14:48:47.699572 |
2025-06-03 14:48:47.699660 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-03 14:48:48.903902 | orchestrator | changed
2025-06-03 14:48:48.915970 |
2025-06-03 14:48:48.916120 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-03 14:48:49.215140 | orchestrator | ok
2025-06-03 14:48:49.234780 |
2025-06-03 14:48:49.235025 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-03 14:48:49.626874 | orchestrator | ok
2025-06-03 14:48:49.632880 |
2025-06-03 14:48:49.632984 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-03 14:48:50.359961 | orchestrator | ok
2025-06-03 14:48:50.366069 |
2025-06-03 14:48:50.366169 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-03 14:48:50.379719 | orchestrator | skipping: Conditional result was False
2025-06-03 14:48:50.386424 |
2025-06-03 14:48:50.386528 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-03 14:48:50.867290 | orchestrator -> localhost | changed
2025-06-03 14:48:50.881639 |
2025-06-03 14:48:50.881756 | TASK [add-build-sshkey : Add back temp key]
2025-06-03 14:48:51.284714 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a7d7e7a961564eaa8d9118892ef2c194/work/a7d7e7a961564eaa8d9118892ef2c194_id_rsa (zuul-build-sshkey)
2025-06-03 14:48:51.285094 | orchestrator -> localhost | ok: Runtime: 0:00:00.022942
2025-06-03 14:48:51.296318 |
2025-06-03 14:48:51.296463 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-03 14:48:51.730130 | orchestrator | ok
2025-06-03 14:48:51.739966 |
2025-06-03 14:48:51.740150 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-03 14:48:51.775536 | orchestrator | skipping: Conditional result was False
2025-06-03 14:48:51.844662 |
2025-06-03 14:48:51.844798 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-03 14:48:52.254877 | orchestrator | ok
2025-06-03 14:48:52.268844 |
2025-06-03 14:48:52.268989 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-03 14:48:52.302233 | orchestrator | ok
2025-06-03 14:48:52.311983 |
2025-06-03 14:48:52.312193 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-03 14:48:52.619689 | orchestrator -> localhost | ok
2025-06-03 14:48:52.628089 |
2025-06-03 14:48:52.628207 | TASK [validate-host : Collect information about the host]
2025-06-03 14:48:53.849066 | orchestrator | ok
2025-06-03 14:48:53.862943 |
2025-06-03 14:48:53.863106 | TASK [validate-host : Sanitize hostname]
2025-06-03 14:48:53.949392 | orchestrator | ok
2025-06-03 14:48:53.973690 |
2025-06-03 14:48:53.973892 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-03 14:48:54.587917 | orchestrator -> localhost | changed
2025-06-03 14:48:54.598079 |
2025-06-03 14:48:54.598272 | TASK [validate-host : Collect information about zuul worker]
2025-06-03 14:48:55.026112 | orchestrator | ok
2025-06-03 14:48:55.066212 |
2025-06-03 14:48:55.066363 | TASK [validate-host : Write out all zuul information for each host]
2025-06-03 14:48:55.747738 | orchestrator -> localhost | changed
2025-06-03 14:48:55.758858 |
2025-06-03 14:48:55.758980 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-03 14:48:56.070971 | orchestrator | ok
2025-06-03 14:48:56.077363 |
2025-06-03 14:48:56.077482 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-03 14:49:44.591443 | orchestrator | changed:
2025-06-03 14:49:44.591684 | orchestrator | .d..t...... src/
2025-06-03 14:49:44.591728 | orchestrator | .d..t...... src/github.com/
2025-06-03 14:49:44.591755 | orchestrator | .d..t...... src/github.com/osism/
2025-06-03 14:49:44.591777 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-03 14:49:44.591797 | orchestrator | RedHat.yml
2025-06-03 14:49:44.602581 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-03 14:49:44.602599 | orchestrator | RedHat.yml
2025-06-03 14:49:44.602652 | orchestrator | = 2.2.0"...
2025-06-03 14:49:57.926095 | orchestrator | 14:49:57.925 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-03 14:49:57.998543 | orchestrator | 14:49:57.998 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-06-03 14:49:59.057876 | orchestrator | 14:49:59.057 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-03 14:50:00.024178 | orchestrator | 14:50:00.023 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-03 14:50:00.996622 | orchestrator | 14:50:00.996 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-03 14:50:02.124465 | orchestrator | 14:50:02.124 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-03 14:50:03.081942 | orchestrator | 14:50:03.081 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-03 14:50:03.934003 | orchestrator | 14:50:03.933 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-03 14:50:03.934273 | orchestrator | 14:50:03.934 STDOUT terraform: Providers are signed by their developers.
2025-06-03 14:50:03.934322 | orchestrator | 14:50:03.934 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-03 14:50:03.934362 | orchestrator | 14:50:03.934 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-03 14:50:03.936781 | orchestrator | 14:50:03.936 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-03 14:50:03.936830 | orchestrator | 14:50:03.936 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-03 14:50:03.936866 | orchestrator | 14:50:03.936 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-03 14:50:03.936886 | orchestrator | 14:50:03.936 STDOUT terraform: you run "tofu init" in the future.
2025-06-03 14:50:03.938342 | orchestrator | 14:50:03.937 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-03 14:50:03.938405 | orchestrator | 14:50:03.937 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-03 14:50:03.938415 | orchestrator | 14:50:03.937 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-03 14:50:03.938423 | orchestrator | 14:50:03.937 STDOUT terraform: should now work.
2025-06-03 14:50:03.938431 | orchestrator | 14:50:03.937 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-03 14:50:03.938439 | orchestrator | 14:50:03.938 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-03 14:50:03.938446 | orchestrator | 14:50:03.938 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-03 14:50:04.148179 | orchestrator | 14:50:04.148 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-06-03 14:50:04.347326 | orchestrator | 14:50:04.347 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-03 14:50:04.347358 | orchestrator | 14:50:04.347 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-03 14:50:04.347409 | orchestrator | 14:50:04.347 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-03 14:50:04.347439 | orchestrator | 14:50:04.347 STDOUT terraform: for this configuration.
2025-06-03 14:50:04.538880 | orchestrator | 14:50:04.538 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
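The init output above shows Terragrunt driving OpenTofu: three providers are installed and pinned in .terraform.lock.hcl, and a fresh "ci" workspace is created before planning. A minimal sketch of a command sequence consistent with this output (the job drives this through its own wrapper scripts, so the exact invocations are assumptions; the binary path is taken from the deprecation warning above):

    export TG_TF_PATH=/home/zuul-testbed03/terraform   # replaces the deprecated TERRAGRUNT_TFPATH
    terragrunt init                # installs hashicorp/null, hashicorp/local and the openstack provider, writes .terraform.lock.hcl
    terragrunt workspace new ci    # "Created and switched to workspace \"ci\"!"
    terragrunt plan                # produces the execution plan shown below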
2025-06-03 14:50:06.209358 | orchestrator | 14:50:06.209 STDOUT terraform: ci.auto.tfvars 2025-06-03 14:50:07.134347 | orchestrator | 14:50:07.134 STDOUT terraform: default_custom.tf 2025-06-03 14:50:07.336620 | orchestrator | 14:50:07.336 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead. 2025-06-03 14:50:08.180381 | orchestrator | 14:50:08.180 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-06-03 14:50:09.242634 | orchestrator | 14:50:09.242 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-06-03 14:50:09.428434 | orchestrator | 14:50:09.428 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-06-03 14:50:09.428567 | orchestrator | 14:50:09.428 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-06-03 14:50:09.428663 | orchestrator | 14:50:09.428 STDOUT terraform:  + create 2025-06-03 14:50:09.428700 | orchestrator | 14:50:09.428 STDOUT terraform:  <= read (data resources) 2025-06-03 14:50:09.428791 | orchestrator | 14:50:09.428 STDOUT terraform: OpenTofu will perform the following actions: 2025-06-03 14:50:09.428942 | orchestrator | 14:50:09.428 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-06-03 14:50:09.429013 | orchestrator | 14:50:09.428 STDOUT terraform:  # (config refers to values not yet known) 2025-06-03 14:50:09.429100 | orchestrator | 14:50:09.429 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-06-03 14:50:09.429185 | orchestrator | 14:50:09.429 STDOUT terraform:  + checksum = (known after apply) 2025-06-03 14:50:09.429326 | orchestrator | 14:50:09.429 STDOUT terraform:  + created_at = (known after apply) 2025-06-03 14:50:09.429407 | orchestrator | 14:50:09.429 STDOUT terraform:  + file = (known after apply) 2025-06-03 14:50:09.429491 | orchestrator | 14:50:09.429 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.429575 | orchestrator | 14:50:09.429 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.429653 | orchestrator | 14:50:09.429 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-06-03 14:50:09.429733 | orchestrator | 14:50:09.429 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-06-03 14:50:09.429786 | orchestrator | 14:50:09.429 STDOUT terraform:  + most_recent = true 2025-06-03 14:50:09.429866 | orchestrator | 14:50:09.429 STDOUT terraform:  + name = (known after apply) 2025-06-03 14:50:09.429943 | orchestrator | 14:50:09.429 STDOUT terraform:  + protected = (known after apply) 2025-06-03 14:50:09.430027 | orchestrator | 14:50:09.429 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.430144 | orchestrator | 14:50:09.430 STDOUT terraform:  + schema = (known after apply) 2025-06-03 14:50:09.430225 | orchestrator | 14:50:09.430 STDOUT terraform:  + size_bytes = (known after apply) 2025-06-03 14:50:09.430364 | orchestrator | 14:50:09.430 STDOUT terraform:  + tags = (known after apply) 2025-06-03 14:50:09.430445 | orchestrator | 14:50:09.430 STDOUT terraform:  + updated_at = (known after apply) 2025-06-03 14:50:09.430484 | orchestrator | 14:50:09.430 STDOUT terraform:  } 2025-06-03 14:50:09.430625 | orchestrator | 14:50:09.430 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 
2025-06-03 14:50:09.430705 | orchestrator | 14:50:09.430 STDOUT terraform:  # (config refers to values not yet known) 2025-06-03 14:50:09.430835 | orchestrator | 14:50:09.430 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-06-03 14:50:09.430910 | orchestrator | 14:50:09.430 STDOUT terraform:  + checksum = (known after apply) 2025-06-03 14:50:09.430984 | orchestrator | 14:50:09.430 STDOUT terraform:  + created_at = (known after apply) 2025-06-03 14:50:09.431063 | orchestrator | 14:50:09.430 STDOUT terraform:  + file = (known after apply) 2025-06-03 14:50:09.431141 | orchestrator | 14:50:09.431 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.431218 | orchestrator | 14:50:09.431 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.431350 | orchestrator | 14:50:09.431 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-06-03 14:50:09.431432 | orchestrator | 14:50:09.431 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-06-03 14:50:09.431490 | orchestrator | 14:50:09.431 STDOUT terraform:  + most_recent = true 2025-06-03 14:50:09.431572 | orchestrator | 14:50:09.431 STDOUT terraform:  + name = (known after apply) 2025-06-03 14:50:09.431648 | orchestrator | 14:50:09.431 STDOUT terraform:  + protected = (known after apply) 2025-06-03 14:50:09.431725 | orchestrator | 14:50:09.431 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.431808 | orchestrator | 14:50:09.431 STDOUT terraform:  + schema = (known after apply) 2025-06-03 14:50:09.431892 | orchestrator | 14:50:09.431 STDOUT terraform:  + size_bytes = (known after apply) 2025-06-03 14:50:09.431961 | orchestrator | 14:50:09.431 STDOUT terraform:  + tags = (known after apply) 2025-06-03 14:50:09.432038 | orchestrator | 14:50:09.431 STDOUT terraform:  + updated_at = (known after apply) 2025-06-03 14:50:09.432074 | orchestrator | 14:50:09.432 STDOUT terraform:  } 2025-06-03 14:50:09.432272 | orchestrator | 14:50:09.432 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-06-03 14:50:09.432345 | orchestrator | 14:50:09.432 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-06-03 14:50:09.432449 | orchestrator | 14:50:09.432 STDOUT terraform:  + content = (known after apply) 2025-06-03 14:50:09.432546 | orchestrator | 14:50:09.432 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-03 14:50:09.432643 | orchestrator | 14:50:09.432 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-03 14:50:09.432740 | orchestrator | 14:50:09.432 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-03 14:50:09.432866 | orchestrator | 14:50:09.432 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-03 14:50:09.432937 | orchestrator | 14:50:09.432 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-03 14:50:09.433028 | orchestrator | 14:50:09.432 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-03 14:50:09.433095 | orchestrator | 14:50:09.433 STDOUT terraform:  + directory_permission = "0777" 2025-06-03 14:50:09.433164 | orchestrator | 14:50:09.433 STDOUT terraform:  + file_permission = "0644" 2025-06-03 14:50:09.433287 | orchestrator | 14:50:09.433 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-06-03 14:50:09.433387 | orchestrator | 14:50:09.433 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.433423 | orchestrator | 14:50:09.433 STDOUT terraform:  } 2025-06-03 14:50:09.433498 | orchestrator | 14:50:09.433 STDOUT 
terraform:  # local_file.id_rsa_pub will be created 2025-06-03 14:50:09.433567 | orchestrator | 14:50:09.433 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-06-03 14:50:09.433668 | orchestrator | 14:50:09.433 STDOUT terraform:  + content = (known after apply) 2025-06-03 14:50:09.433764 | orchestrator | 14:50:09.433 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-03 14:50:09.433859 | orchestrator | 14:50:09.433 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-03 14:50:09.433957 | orchestrator | 14:50:09.433 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-03 14:50:09.434080 | orchestrator | 14:50:09.433 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-03 14:50:09.434173 | orchestrator | 14:50:09.434 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-03 14:50:09.434333 | orchestrator | 14:50:09.434 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-03 14:50:09.434404 | orchestrator | 14:50:09.434 STDOUT terraform:  + directory_permission = "0777" 2025-06-03 14:50:09.434473 | orchestrator | 14:50:09.434 STDOUT terraform:  + file_permission = "0644" 2025-06-03 14:50:09.434611 | orchestrator | 14:50:09.434 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-06-03 14:50:09.434760 | orchestrator | 14:50:09.434 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.434800 | orchestrator | 14:50:09.434 STDOUT terraform:  } 2025-06-03 14:50:09.434864 | orchestrator | 14:50:09.434 STDOUT terraform:  # local_file.inventory will be created 2025-06-03 14:50:09.434925 | orchestrator | 14:50:09.434 STDOUT terraform:  + resource "local_file" "inventory" { 2025-06-03 14:50:09.435013 | orchestrator | 14:50:09.434 STDOUT terraform:  + content = (known after apply) 2025-06-03 14:50:09.435097 | orchestrator | 14:50:09.435 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-03 14:50:09.435182 | orchestrator | 14:50:09.435 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-03 14:50:09.435310 | orchestrator | 14:50:09.435 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-03 14:50:09.435416 | orchestrator | 14:50:09.435 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-03 14:50:09.435555 | orchestrator | 14:50:09.435 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-03 14:50:09.435650 | orchestrator | 14:50:09.435 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-03 14:50:09.435712 | orchestrator | 14:50:09.435 STDOUT terraform:  + directory_permission = "0777" 2025-06-03 14:50:09.435772 | orchestrator | 14:50:09.435 STDOUT terraform:  + file_permission = "0644" 2025-06-03 14:50:09.435846 | orchestrator | 14:50:09.435 STDOUT terraform:  + filename = "inventory.ci" 2025-06-03 14:50:09.435933 | orchestrator | 14:50:09.435 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.435966 | orchestrator | 14:50:09.435 STDOUT terraform:  } 2025-06-03 14:50:09.436042 | orchestrator | 14:50:09.435 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-06-03 14:50:09.436113 | orchestrator | 14:50:09.436 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-06-03 14:50:09.436187 | orchestrator | 14:50:09.436 STDOUT terraform:  + content = (sensitive value) 2025-06-03 14:50:09.436292 | orchestrator | 14:50:09.436 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-03 14:50:09.436378 | orchestrator | 14:50:09.436 STDOUT terraform:  + 
content_base64sha512 = (known after apply) 2025-06-03 14:50:09.436462 | orchestrator | 14:50:09.436 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-03 14:50:09.436550 | orchestrator | 14:50:09.436 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-03 14:50:09.436635 | orchestrator | 14:50:09.436 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-03 14:50:09.436722 | orchestrator | 14:50:09.436 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-03 14:50:09.436783 | orchestrator | 14:50:09.436 STDOUT terraform:  + directory_permission = "0700" 2025-06-03 14:50:09.436842 | orchestrator | 14:50:09.436 STDOUT terraform:  + file_permission = "0600" 2025-06-03 14:50:09.436920 | orchestrator | 14:50:09.436 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-06-03 14:50:09.437005 | orchestrator | 14:50:09.436 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.437036 | orchestrator | 14:50:09.437 STDOUT terraform:  } 2025-06-03 14:50:09.437108 | orchestrator | 14:50:09.437 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-06-03 14:50:09.437180 | orchestrator | 14:50:09.437 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-06-03 14:50:09.437229 | orchestrator | 14:50:09.437 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.437277 | orchestrator | 14:50:09.437 STDOUT terraform:  } 2025-06-03 14:50:09.437398 | orchestrator | 14:50:09.437 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-06-03 14:50:09.437513 | orchestrator | 14:50:09.437 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-06-03 14:50:09.437600 | orchestrator | 14:50:09.437 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.437659 | orchestrator | 14:50:09.437 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.437748 | orchestrator | 14:50:09.437 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.437835 | orchestrator | 14:50:09.437 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.437921 | orchestrator | 14:50:09.437 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.438057 | orchestrator | 14:50:09.437 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-06-03 14:50:09.438146 | orchestrator | 14:50:09.438 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.438196 | orchestrator | 14:50:09.438 STDOUT terraform:  + size = 80 2025-06-03 14:50:09.438303 | orchestrator | 14:50:09.438 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.438368 | orchestrator | 14:50:09.438 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.438397 | orchestrator | 14:50:09.438 STDOUT terraform:  } 2025-06-03 14:50:09.438492 | orchestrator | 14:50:09.438 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-06-03 14:50:09.438586 | orchestrator | 14:50:09.438 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-03 14:50:09.438658 | orchestrator | 14:50:09.438 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.438706 | orchestrator | 14:50:09.438 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.438780 | orchestrator | 14:50:09.438 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.438850 | orchestrator | 14:50:09.438 STDOUT terraform:  + image_id = (known 
after apply) 2025-06-03 14:50:09.438922 | orchestrator | 14:50:09.438 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.439011 | orchestrator | 14:50:09.438 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-06-03 14:50:09.439086 | orchestrator | 14:50:09.439 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.439129 | orchestrator | 14:50:09.439 STDOUT terraform:  + size = 80 2025-06-03 14:50:09.439177 | orchestrator | 14:50:09.439 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.439224 | orchestrator | 14:50:09.439 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.439265 | orchestrator | 14:50:09.439 STDOUT terraform:  } 2025-06-03 14:50:09.439360 | orchestrator | 14:50:09.439 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-06-03 14:50:09.439451 | orchestrator | 14:50:09.439 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-03 14:50:09.439536 | orchestrator | 14:50:09.439 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.439586 | orchestrator | 14:50:09.439 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.439658 | orchestrator | 14:50:09.439 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.439727 | orchestrator | 14:50:09.439 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.439796 | orchestrator | 14:50:09.439 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.439885 | orchestrator | 14:50:09.439 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-06-03 14:50:09.439957 | orchestrator | 14:50:09.439 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.440001 | orchestrator | 14:50:09.439 STDOUT terraform:  + size = 80 2025-06-03 14:50:09.440050 | orchestrator | 14:50:09.439 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.440098 | orchestrator | 14:50:09.440 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.440125 | orchestrator | 14:50:09.440 STDOUT terraform:  } 2025-06-03 14:50:09.440217 | orchestrator | 14:50:09.440 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-06-03 14:50:09.440331 | orchestrator | 14:50:09.440 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-03 14:50:09.440394 | orchestrator | 14:50:09.440 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.440442 | orchestrator | 14:50:09.440 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.440519 | orchestrator | 14:50:09.440 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.440587 | orchestrator | 14:50:09.440 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.440657 | orchestrator | 14:50:09.440 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.440746 | orchestrator | 14:50:09.440 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-06-03 14:50:09.440820 | orchestrator | 14:50:09.440 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.440866 | orchestrator | 14:50:09.440 STDOUT terraform:  + size = 80 2025-06-03 14:50:09.440916 | orchestrator | 14:50:09.440 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.440966 | orchestrator | 14:50:09.440 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.440990 | orchestrator | 14:50:09.440 STDOUT terraform: 
 } 2025-06-03 14:50:09.441084 | orchestrator | 14:50:09.440 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-06-03 14:50:09.441175 | orchestrator | 14:50:09.441 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-03 14:50:09.441277 | orchestrator | 14:50:09.441 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.441327 | orchestrator | 14:50:09.441 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.441400 | orchestrator | 14:50:09.441 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.441471 | orchestrator | 14:50:09.441 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.441542 | orchestrator | 14:50:09.441 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.441632 | orchestrator | 14:50:09.441 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-06-03 14:50:09.441704 | orchestrator | 14:50:09.441 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.441746 | orchestrator | 14:50:09.441 STDOUT terraform:  + size = 80 2025-06-03 14:50:09.441795 | orchestrator | 14:50:09.441 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.441837 | orchestrator | 14:50:09.441 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.441861 | orchestrator | 14:50:09.441 STDOUT terraform:  } 2025-06-03 14:50:09.441941 | orchestrator | 14:50:09.441 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-06-03 14:50:09.442022 | orchestrator | 14:50:09.441 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-03 14:50:09.442098 | orchestrator | 14:50:09.442 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.442139 | orchestrator | 14:50:09.442 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.442201 | orchestrator | 14:50:09.442 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.442275 | orchestrator | 14:50:09.442 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.442333 | orchestrator | 14:50:09.442 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.442411 | orchestrator | 14:50:09.442 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-06-03 14:50:09.442469 | orchestrator | 14:50:09.442 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.442505 | orchestrator | 14:50:09.442 STDOUT terraform:  + size = 80 2025-06-03 14:50:09.442549 | orchestrator | 14:50:09.442 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.442586 | orchestrator | 14:50:09.442 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.442611 | orchestrator | 14:50:09.442 STDOUT terraform:  } 2025-06-03 14:50:09.442690 | orchestrator | 14:50:09.442 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-06-03 14:50:09.442794 | orchestrator | 14:50:09.442 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-03 14:50:09.442854 | orchestrator | 14:50:09.442 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.442896 | orchestrator | 14:50:09.442 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.442960 | orchestrator | 14:50:09.442 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.443021 | orchestrator | 14:50:09.442 STDOUT terraform:  + image_id = (known 
after apply) 2025-06-03 14:50:09.443084 | orchestrator | 14:50:09.443 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.443160 | orchestrator | 14:50:09.443 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-06-03 14:50:09.443221 | orchestrator | 14:50:09.443 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.443296 | orchestrator | 14:50:09.443 STDOUT terraform:  + size = 80 2025-06-03 14:50:09.443330 | orchestrator | 14:50:09.443 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.443371 | orchestrator | 14:50:09.443 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.443394 | orchestrator | 14:50:09.443 STDOUT terraform:  } 2025-06-03 14:50:09.443470 | orchestrator | 14:50:09.443 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-06-03 14:50:09.443540 | orchestrator | 14:50:09.443 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:50:09.443598 | orchestrator | 14:50:09.443 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.443636 | orchestrator | 14:50:09.443 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.443695 | orchestrator | 14:50:09.443 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.443751 | orchestrator | 14:50:09.443 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.443843 | orchestrator | 14:50:09.443 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-03 14:50:09.443930 | orchestrator | 14:50:09.443 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.443965 | orchestrator | 14:50:09.443 STDOUT terraform:  + size = 20 2025-06-03 14:50:09.444004 | orchestrator | 14:50:09.443 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.444044 | orchestrator | 14:50:09.443 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.444067 | orchestrator | 14:50:09.444 STDOUT terraform:  } 2025-06-03 14:50:09.444138 | orchestrator | 14:50:09.444 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-03 14:50:09.444210 | orchestrator | 14:50:09.444 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:50:09.444285 | orchestrator | 14:50:09.444 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.444323 | orchestrator | 14:50:09.444 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.444381 | orchestrator | 14:50:09.444 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.444439 | orchestrator | 14:50:09.444 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.444501 | orchestrator | 14:50:09.444 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-03 14:50:09.444558 | orchestrator | 14:50:09.444 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.444592 | orchestrator | 14:50:09.444 STDOUT terraform:  + size = 20 2025-06-03 14:50:09.444630 | orchestrator | 14:50:09.444 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.444669 | orchestrator | 14:50:09.444 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.444686 | orchestrator | 14:50:09.444 STDOUT terraform:  } 2025-06-03 14:50:09.444756 | orchestrator | 14:50:09.444 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-03 14:50:09.444825 | orchestrator | 14:50:09.444 STDOUT terraform:  + resource 
"openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:50:09.444881 | orchestrator | 14:50:09.444 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.444921 | orchestrator | 14:50:09.444 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.444980 | orchestrator | 14:50:09.444 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.445037 | orchestrator | 14:50:09.444 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.445100 | orchestrator | 14:50:09.445 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-03 14:50:09.445157 | orchestrator | 14:50:09.445 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.445191 | orchestrator | 14:50:09.445 STDOUT terraform:  + size = 20 2025-06-03 14:50:09.445266 | orchestrator | 14:50:09.445 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.445332 | orchestrator | 14:50:09.445 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.445354 | orchestrator | 14:50:09.445 STDOUT terraform:  } 2025-06-03 14:50:09.445426 | orchestrator | 14:50:09.445 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-03 14:50:09.445494 | orchestrator | 14:50:09.445 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:50:09.445550 | orchestrator | 14:50:09.445 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.445587 | orchestrator | 14:50:09.445 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.445644 | orchestrator | 14:50:09.445 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.445702 | orchestrator | 14:50:09.445 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.445768 | orchestrator | 14:50:09.445 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-03 14:50:09.445824 | orchestrator | 14:50:09.445 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.445860 | orchestrator | 14:50:09.445 STDOUT terraform:  + size = 20 2025-06-03 14:50:09.445903 | orchestrator | 14:50:09.445 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.445937 | orchestrator | 14:50:09.445 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.445958 | orchestrator | 14:50:09.445 STDOUT terraform:  } 2025-06-03 14:50:09.446063 | orchestrator | 14:50:09.445 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-03 14:50:09.446132 | orchestrator | 14:50:09.446 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:50:09.446191 | orchestrator | 14:50:09.446 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.446230 | orchestrator | 14:50:09.446 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.446335 | orchestrator | 14:50:09.446 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.446393 | orchestrator | 14:50:09.446 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.446453 | orchestrator | 14:50:09.446 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-03 14:50:09.446512 | orchestrator | 14:50:09.446 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.446545 | orchestrator | 14:50:09.446 STDOUT terraform:  + size = 20 2025-06-03 14:50:09.446585 | orchestrator | 14:50:09.446 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.446624 | orchestrator | 14:50:09.446 
STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.446638 | orchestrator | 14:50:09.446 STDOUT terraform:  } 2025-06-03 14:50:09.446707 | orchestrator | 14:50:09.446 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-03 14:50:09.446767 | orchestrator | 14:50:09.446 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:50:09.446817 | orchestrator | 14:50:09.446 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.446850 | orchestrator | 14:50:09.446 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.446901 | orchestrator | 14:50:09.446 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.446951 | orchestrator | 14:50:09.446 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.447004 | orchestrator | 14:50:09.446 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-03 14:50:09.447055 | orchestrator | 14:50:09.447 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.447085 | orchestrator | 14:50:09.447 STDOUT terraform:  + size = 20 2025-06-03 14:50:09.447119 | orchestrator | 14:50:09.447 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.447166 | orchestrator | 14:50:09.447 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.447186 | orchestrator | 14:50:09.447 STDOUT terraform:  } 2025-06-03 14:50:09.447270 | orchestrator | 14:50:09.447 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-03 14:50:09.447321 | orchestrator | 14:50:09.447 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:50:09.447371 | orchestrator | 14:50:09.447 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.447405 | orchestrator | 14:50:09.447 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.447456 | orchestrator | 14:50:09.447 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.447506 | orchestrator | 14:50:09.447 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.447559 | orchestrator | 14:50:09.447 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-03 14:50:09.447609 | orchestrator | 14:50:09.447 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.447638 | orchestrator | 14:50:09.447 STDOUT terraform:  + size = 20 2025-06-03 14:50:09.447681 | orchestrator | 14:50:09.447 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.447708 | orchestrator | 14:50:09.447 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.447726 | orchestrator | 14:50:09.447 STDOUT terraform:  } 2025-06-03 14:50:09.447789 | orchestrator | 14:50:09.447 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-03 14:50:09.447848 | orchestrator | 14:50:09.447 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:50:09.447898 | orchestrator | 14:50:09.447 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.447932 | orchestrator | 14:50:09.447 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.447984 | orchestrator | 14:50:09.447 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.448034 | orchestrator | 14:50:09.447 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.448089 | orchestrator | 14:50:09.448 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-03 
14:50:09.448138 | orchestrator | 14:50:09.448 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.448167 | orchestrator | 14:50:09.448 STDOUT terraform:  + size = 20 2025-06-03 14:50:09.448201 | orchestrator | 14:50:09.448 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.448263 | orchestrator | 14:50:09.448 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.448270 | orchestrator | 14:50:09.448 STDOUT terraform:  } 2025-06-03 14:50:09.448320 | orchestrator | 14:50:09.448 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-03 14:50:09.448381 | orchestrator | 14:50:09.448 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:50:09.448432 | orchestrator | 14:50:09.448 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:50:09.448468 | orchestrator | 14:50:09.448 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.448518 | orchestrator | 14:50:09.448 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.448569 | orchestrator | 14:50:09.448 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:50:09.448622 | orchestrator | 14:50:09.448 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-03 14:50:09.448670 | orchestrator | 14:50:09.448 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.448699 | orchestrator | 14:50:09.448 STDOUT terraform:  + size = 20 2025-06-03 14:50:09.448734 | orchestrator | 14:50:09.448 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:50:09.448767 | orchestrator | 14:50:09.448 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:50:09.448787 | orchestrator | 14:50:09.448 STDOUT terraform:  } 2025-06-03 14:50:09.448849 | orchestrator | 14:50:09.448 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-03 14:50:09.448908 | orchestrator | 14:50:09.448 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-03 14:50:09.448961 | orchestrator | 14:50:09.448 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-03 14:50:09.449012 | orchestrator | 14:50:09.448 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-03 14:50:09.449060 | orchestrator | 14:50:09.449 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-03 14:50:09.449109 | orchestrator | 14:50:09.449 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.449143 | orchestrator | 14:50:09.449 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.449172 | orchestrator | 14:50:09.449 STDOUT terraform:  + config_drive = true 2025-06-03 14:50:09.449230 | orchestrator | 14:50:09.449 STDOUT terraform:  + created = (known after apply) 2025-06-03 14:50:09.449289 | orchestrator | 14:50:09.449 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-03 14:50:09.449332 | orchestrator | 14:50:09.449 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-03 14:50:09.449364 | orchestrator | 14:50:09.449 STDOUT terraform:  + force_delete = false 2025-06-03 14:50:09.449411 | orchestrator | 14:50:09.449 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-03 14:50:09.449462 | orchestrator | 14:50:09.449 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.449515 | orchestrator | 14:50:09.449 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.449559 | orchestrator | 14:50:09.449 STDOUT terraform:  + image_name = (known after 
apply) 2025-06-03 14:50:09.449595 | orchestrator | 14:50:09.449 STDOUT terraform:  + key_pair = "testbed" 2025-06-03 14:50:09.449638 | orchestrator | 14:50:09.449 STDOUT terraform:  + name = "testbed-manager" 2025-06-03 14:50:09.449673 | orchestrator | 14:50:09.449 STDOUT terraform:  + power_state = "active" 2025-06-03 14:50:09.449722 | orchestrator | 14:50:09.449 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.449771 | orchestrator | 14:50:09.449 STDOUT terraform:  + security_groups = (known after apply) 2025-06-03 14:50:09.449804 | orchestrator | 14:50:09.449 STDOUT terraform:  + stop_before_destroy = false 2025-06-03 14:50:09.449853 | orchestrator | 14:50:09.449 STDOUT terraform:  + updated = (known after apply) 2025-06-03 14:50:09.449902 | orchestrator | 14:50:09.449 STDOUT terraform:  + user_data = (known after apply) 2025-06-03 14:50:09.449926 | orchestrator | 14:50:09.449 STDOUT terraform:  + block_device { 2025-06-03 14:50:09.449960 | orchestrator | 14:50:09.449 STDOUT terraform:  + boot_index = 0 2025-06-03 14:50:09.450001 | orchestrator | 14:50:09.449 STDOUT terraform:  + delete_on_termination = false 2025-06-03 14:50:09.450059 | orchestrator | 14:50:09.449 STDOUT terraform:  + destination_type = "volume" 2025-06-03 14:50:09.450097 | orchestrator | 14:50:09.450 STDOUT terraform:  + multiattach = false 2025-06-03 14:50:09.450139 | orchestrator | 14:50:09.450 STDOUT terraform:  + source_type = "volume" 2025-06-03 14:50:09.450270 | orchestrator | 14:50:09.450 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.450293 | orchestrator | 14:50:09.450 STDOUT terraform:  } 2025-06-03 14:50:09.450316 | orchestrator | 14:50:09.450 STDOUT terraform:  + network { 2025-06-03 14:50:09.450346 | orchestrator | 14:50:09.450 STDOUT terraform:  + access_network = false 2025-06-03 14:50:09.450390 | orchestrator | 14:50:09.450 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-03 14:50:09.450429 | orchestrator | 14:50:09.450 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-03 14:50:09.450469 | orchestrator | 14:50:09.450 STDOUT terraform:  + mac = (known after apply) 2025-06-03 14:50:09.450508 | orchestrator | 14:50:09.450 STDOUT terraform:  + name = (known after apply) 2025-06-03 14:50:09.450552 | orchestrator | 14:50:09.450 STDOUT terraform:  + port = (known after apply) 2025-06-03 14:50:09.450591 | orchestrator | 14:50:09.450 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.450602 | orchestrator | 14:50:09.450 STDOUT terraform:  } 2025-06-03 14:50:09.450621 | orchestrator | 14:50:09.450 STDOUT terraform:  } 2025-06-03 14:50:09.450675 | orchestrator | 14:50:09.450 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-03 14:50:09.450727 | orchestrator | 14:50:09.450 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-03 14:50:09.450771 | orchestrator | 14:50:09.450 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-03 14:50:09.450814 | orchestrator | 14:50:09.450 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-03 14:50:09.450860 | orchestrator | 14:50:09.450 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-03 14:50:09.450905 | orchestrator | 14:50:09.450 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.450934 | orchestrator | 14:50:09.450 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.450961 | orchestrator | 14:50:09.450 STDOUT terraform:  + config_drive = true 
2025-06-03 14:50:09.451005 | orchestrator | 14:50:09.450 STDOUT terraform:  + created = (known after apply) 2025-06-03 14:50:09.451049 | orchestrator | 14:50:09.451 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-03 14:50:09.451086 | orchestrator | 14:50:09.451 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-03 14:50:09.451116 | orchestrator | 14:50:09.451 STDOUT terraform:  + force_delete = false 2025-06-03 14:50:09.451158 | orchestrator | 14:50:09.451 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-03 14:50:09.451203 | orchestrator | 14:50:09.451 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.451272 | orchestrator | 14:50:09.451 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.451323 | orchestrator | 14:50:09.451 STDOUT terraform:  + image_name = (known after apply) 2025-06-03 14:50:09.451355 | orchestrator | 14:50:09.451 STDOUT terraform:  + key_pair = "testbed" 2025-06-03 14:50:09.451391 | orchestrator | 14:50:09.451 STDOUT terraform:  + name = "testbed-node-0" 2025-06-03 14:50:09.451422 | orchestrator | 14:50:09.451 STDOUT terraform:  + power_state = "active" 2025-06-03 14:50:09.451473 | orchestrator | 14:50:09.451 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.451509 | orchestrator | 14:50:09.451 STDOUT terraform:  + security_groups = (known after apply) 2025-06-03 14:50:09.451538 | orchestrator | 14:50:09.451 STDOUT terraform:  + stop_before_destroy = false 2025-06-03 14:50:09.451582 | orchestrator | 14:50:09.451 STDOUT terraform:  + updated = (known after apply) 2025-06-03 14:50:09.451649 | orchestrator | 14:50:09.451 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-03 14:50:09.451667 | orchestrator | 14:50:09.451 STDOUT terraform:  + block_device { 2025-06-03 14:50:09.451697 | orchestrator | 14:50:09.451 STDOUT terraform:  + boot_index = 0 2025-06-03 14:50:09.451732 | orchestrator | 14:50:09.451 STDOUT terraform:  + delete_on_termination = false 2025-06-03 14:50:09.451769 | orchestrator | 14:50:09.451 STDOUT terraform:  + destination_type = "volume" 2025-06-03 14:50:09.451804 | orchestrator | 14:50:09.451 STDOUT terraform:  + multiattach = false 2025-06-03 14:50:09.451841 | orchestrator | 14:50:09.451 STDOUT terraform:  + source_type = "volume" 2025-06-03 14:50:09.451890 | orchestrator | 14:50:09.451 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.451897 | orchestrator | 14:50:09.451 STDOUT terraform:  } 2025-06-03 14:50:09.451924 | orchestrator | 14:50:09.451 STDOUT terraform:  + network { 2025-06-03 14:50:09.451952 | orchestrator | 14:50:09.451 STDOUT terraform:  + access_network = false 2025-06-03 14:50:09.451991 | orchestrator | 14:50:09.451 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-03 14:50:09.452030 | orchestrator | 14:50:09.451 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-03 14:50:09.452069 | orchestrator | 14:50:09.452 STDOUT terraform:  + mac = (known after apply) 2025-06-03 14:50:09.452108 | orchestrator | 14:50:09.452 STDOUT terraform:  + name = (known after apply) 2025-06-03 14:50:09.452149 | orchestrator | 14:50:09.452 STDOUT terraform:  + port = (known after apply) 2025-06-03 14:50:09.452187 | orchestrator | 14:50:09.452 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.452193 | orchestrator | 14:50:09.452 STDOUT terraform:  } 2025-06-03 14:50:09.452217 | orchestrator | 14:50:09.452 STDOUT terraform:  } 2025-06-03 14:50:09.452281 | orchestrator | 
14:50:09.452 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-03 14:50:09.452334 | orchestrator | 14:50:09.452 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-03 14:50:09.452378 | orchestrator | 14:50:09.452 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-03 14:50:09.452423 | orchestrator | 14:50:09.452 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-03 14:50:09.452466 | orchestrator | 14:50:09.452 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-03 14:50:09.452510 | orchestrator | 14:50:09.452 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.452539 | orchestrator | 14:50:09.452 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.452566 | orchestrator | 14:50:09.452 STDOUT terraform:  + config_drive = true 2025-06-03 14:50:09.452610 | orchestrator | 14:50:09.452 STDOUT terraform:  + created = (known after apply) 2025-06-03 14:50:09.452654 | orchestrator | 14:50:09.452 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-03 14:50:09.452691 | orchestrator | 14:50:09.452 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-03 14:50:09.452720 | orchestrator | 14:50:09.452 STDOUT terraform:  + force_delete = false 2025-06-03 14:50:09.452765 | orchestrator | 14:50:09.452 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-03 14:50:09.452809 | orchestrator | 14:50:09.452 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.452855 | orchestrator | 14:50:09.452 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.452900 | orchestrator | 14:50:09.452 STDOUT terraform:  + image_name = (known after apply) 2025-06-03 14:50:09.452921 | orchestrator | 14:50:09.452 STDOUT terraform:  + key_pair = "testbed" 2025-06-03 14:50:09.452965 | orchestrator | 14:50:09.452 STDOUT terraform:  + name = "testbed-node-1" 2025-06-03 14:50:09.452997 | orchestrator | 14:50:09.452 STDOUT terraform:  + power_state = "active" 2025-06-03 14:50:09.453042 | orchestrator | 14:50:09.452 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.453086 | orchestrator | 14:50:09.453 STDOUT terraform:  + security_groups = (known after apply) 2025-06-03 14:50:09.453116 | orchestrator | 14:50:09.453 STDOUT terraform:  + stop_before_destroy = false 2025-06-03 14:50:09.453160 | orchestrator | 14:50:09.453 STDOUT terraform:  + updated = (known after apply) 2025-06-03 14:50:09.453223 | orchestrator | 14:50:09.453 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-03 14:50:09.453258 | orchestrator | 14:50:09.453 STDOUT terraform:  + block_device { 2025-06-03 14:50:09.453294 | orchestrator | 14:50:09.453 STDOUT terraform:  + boot_index = 0 2025-06-03 14:50:09.453328 | orchestrator | 14:50:09.453 STDOUT terraform:  + delete_on_termination = false 2025-06-03 14:50:09.453365 | orchestrator | 14:50:09.453 STDOUT terraform:  + destination_type = "volume" 2025-06-03 14:50:09.453401 | orchestrator | 14:50:09.453 STDOUT terraform:  + multiattach = false 2025-06-03 14:50:09.453441 | orchestrator | 14:50:09.453 STDOUT terraform:  + source_type = "volume" 2025-06-03 14:50:09.453486 | orchestrator | 14:50:09.453 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.453502 | orchestrator | 14:50:09.453 STDOUT terraform:  } 2025-06-03 14:50:09.453508 | orchestrator | 14:50:09.453 STDOUT terraform:  + network { 2025-06-03 14:50:09.453540 | orchestrator | 14:50:09.453 STDOUT 
terraform:  + access_network = false 2025-06-03 14:50:09.453578 | orchestrator | 14:50:09.453 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-03 14:50:09.453614 | orchestrator | 14:50:09.453 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-03 14:50:09.453652 | orchestrator | 14:50:09.453 STDOUT terraform:  + mac = (known after apply) 2025-06-03 14:50:09.453689 | orchestrator | 14:50:09.453 STDOUT terraform:  + name = (known after apply) 2025-06-03 14:50:09.453727 | orchestrator | 14:50:09.453 STDOUT terraform:  + port = (known after apply) 2025-06-03 14:50:09.453765 | orchestrator | 14:50:09.453 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.453771 | orchestrator | 14:50:09.453 STDOUT terraform:  } 2025-06-03 14:50:09.453794 | orchestrator | 14:50:09.453 STDOUT terraform:  } 2025-06-03 14:50:09.453845 | orchestrator | 14:50:09.453 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-03 14:50:09.453894 | orchestrator | 14:50:09.453 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-03 14:50:09.453936 | orchestrator | 14:50:09.453 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-03 14:50:09.453977 | orchestrator | 14:50:09.453 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-03 14:50:09.454085 | orchestrator | 14:50:09.453 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-03 14:50:09.454113 | orchestrator | 14:50:09.454 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.454153 | orchestrator | 14:50:09.454 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.454185 | orchestrator | 14:50:09.454 STDOUT terraform:  + config_drive = true 2025-06-03 14:50:09.454223 | orchestrator | 14:50:09.454 STDOUT terraform:  + created = (known after apply) 2025-06-03 14:50:09.454279 | orchestrator | 14:50:09.454 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-03 14:50:09.454312 | orchestrator | 14:50:09.454 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-03 14:50:09.454341 | orchestrator | 14:50:09.454 STDOUT terraform:  + force_delete = false 2025-06-03 14:50:09.454382 | orchestrator | 14:50:09.454 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-03 14:50:09.454425 | orchestrator | 14:50:09.454 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.454463 | orchestrator | 14:50:09.454 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.454510 | orchestrator | 14:50:09.454 STDOUT terraform:  + image_name = (known after apply) 2025-06-03 14:50:09.454538 | orchestrator | 14:50:09.454 STDOUT terraform:  + key_pair = "testbed" 2025-06-03 14:50:09.454576 | orchestrator | 14:50:09.454 STDOUT terraform:  + name = "testbed-node-2" 2025-06-03 14:50:09.454605 | orchestrator | 14:50:09.454 STDOUT terraform:  + power_state = "active" 2025-06-03 14:50:09.454650 | orchestrator | 14:50:09.454 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.454692 | orchestrator | 14:50:09.454 STDOUT terraform:  + security_groups = (known after apply) 2025-06-03 14:50:09.454711 | orchestrator | 14:50:09.454 STDOUT terraform:  + stop_before_destroy = false 2025-06-03 14:50:09.454758 | orchestrator | 14:50:09.454 STDOUT terraform:  + updated = (known after apply) 2025-06-03 14:50:09.454814 | orchestrator | 14:50:09.454 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-03 14:50:09.454824 | orchestrator | 
14:50:09.454 STDOUT terraform:  + block_device { 2025-06-03 14:50:09.454859 | orchestrator | 14:50:09.454 STDOUT terraform:  + boot_index = 0 2025-06-03 14:50:09.454891 | orchestrator | 14:50:09.454 STDOUT terraform:  + delete_on_termination = false 2025-06-03 14:50:09.454928 | orchestrator | 14:50:09.454 STDOUT terraform:  + destination_type = "volume" 2025-06-03 14:50:09.454961 | orchestrator | 14:50:09.454 STDOUT terraform:  + multiattach = false 2025-06-03 14:50:09.454996 | orchestrator | 14:50:09.454 STDOUT terraform:  + source_type = "volume" 2025-06-03 14:50:09.455041 | orchestrator | 14:50:09.454 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.455051 | orchestrator | 14:50:09.455 STDOUT terraform:  } 2025-06-03 14:50:09.455935 | orchestrator | 14:50:09.455 STDOUT terraform:  + network { 2025-06-03 14:50:09.455961 | orchestrator | 14:50:09.455 STDOUT terraform:  + access_network = false 2025-06-03 14:50:09.455978 | orchestrator | 14:50:09.455 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-03 14:50:09.455995 | orchestrator | 14:50:09.455 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-03 14:50:09.456050 | orchestrator | 14:50:09.455 STDOUT terraform:  + mac = (known after apply) 2025-06-03 14:50:09.456058 | orchestrator | 14:50:09.456 STDOUT terraform:  + name = (known after apply) 2025-06-03 14:50:09.456104 | orchestrator | 14:50:09.456 STDOUT terraform:  + port = (known after apply) 2025-06-03 14:50:09.456131 | orchestrator | 14:50:09.456 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.456138 | orchestrator | 14:50:09.456 STDOUT terraform:  } 2025-06-03 14:50:09.456154 | orchestrator | 14:50:09.456 STDOUT terraform:  } 2025-06-03 14:50:09.456248 | orchestrator | 14:50:09.456 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-06-03 14:50:09.456258 | orchestrator | 14:50:09.456 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-03 14:50:09.456314 | orchestrator | 14:50:09.456 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-03 14:50:09.456353 | orchestrator | 14:50:09.456 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-03 14:50:09.456399 | orchestrator | 14:50:09.456 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-03 14:50:09.456431 | orchestrator | 14:50:09.456 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.456457 | orchestrator | 14:50:09.456 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.456481 | orchestrator | 14:50:09.456 STDOUT terraform:  + config_drive = true 2025-06-03 14:50:09.456520 | orchestrator | 14:50:09.456 STDOUT terraform:  + created = (known after apply) 2025-06-03 14:50:09.456559 | orchestrator | 14:50:09.456 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-03 14:50:09.456592 | orchestrator | 14:50:09.456 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-03 14:50:09.456617 | orchestrator | 14:50:09.456 STDOUT terraform:  + force_delete = false 2025-06-03 14:50:09.456654 | orchestrator | 14:50:09.456 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-03 14:50:09.456710 | orchestrator | 14:50:09.456 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.456764 | orchestrator | 14:50:09.456 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.456819 | orchestrator | 14:50:09.456 STDOUT terraform:  + image_name = (known after apply) 2025-06-03 14:50:09.456853 | 
orchestrator | 14:50:09.456 STDOUT terraform:  + key_pair = "testbed" 2025-06-03 14:50:09.456888 | orchestrator | 14:50:09.456 STDOUT terraform:  + name = "testbed-node-3" 2025-06-03 14:50:09.456911 | orchestrator | 14:50:09.456 STDOUT terraform:  + power_state = "active" 2025-06-03 14:50:09.456947 | orchestrator | 14:50:09.456 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.456985 | orchestrator | 14:50:09.456 STDOUT terraform:  + security_groups = (known after apply) 2025-06-03 14:50:09.457010 | orchestrator | 14:50:09.456 STDOUT terraform:  + stop_before_destroy = false 2025-06-03 14:50:09.457053 | orchestrator | 14:50:09.457 STDOUT terraform:  + updated = (known after apply) 2025-06-03 14:50:09.457106 | orchestrator | 14:50:09.457 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-03 14:50:09.457123 | orchestrator | 14:50:09.457 STDOUT terraform:  + block_device { 2025-06-03 14:50:09.457156 | orchestrator | 14:50:09.457 STDOUT terraform:  + boot_index = 0 2025-06-03 14:50:09.457205 | orchestrator | 14:50:09.457 STDOUT terraform:  + delete_on_termination = false 2025-06-03 14:50:09.457257 | orchestrator | 14:50:09.457 STDOUT terraform:  + destination_type = "volume" 2025-06-03 14:50:09.457275 | orchestrator | 14:50:09.457 STDOUT terraform:  + multiattach = false 2025-06-03 14:50:09.457308 | orchestrator | 14:50:09.457 STDOUT terraform:  + source_type = "volume" 2025-06-03 14:50:09.457351 | orchestrator | 14:50:09.457 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.457357 | orchestrator | 14:50:09.457 STDOUT terraform:  } 2025-06-03 14:50:09.457380 | orchestrator | 14:50:09.457 STDOUT terraform:  + network { 2025-06-03 14:50:09.457404 | orchestrator | 14:50:09.457 STDOUT terraform:  + access_network = false 2025-06-03 14:50:09.457438 | orchestrator | 14:50:09.457 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-03 14:50:09.457471 | orchestrator | 14:50:09.457 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-03 14:50:09.457506 | orchestrator | 14:50:09.457 STDOUT terraform:  + mac = (known after apply) 2025-06-03 14:50:09.457540 | orchestrator | 14:50:09.457 STDOUT terraform:  + name = (known after apply) 2025-06-03 14:50:09.457575 | orchestrator | 14:50:09.457 STDOUT terraform:  + port = (known after apply) 2025-06-03 14:50:09.457609 | orchestrator | 14:50:09.457 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.457615 | orchestrator | 14:50:09.457 STDOUT terraform:  } 2025-06-03 14:50:09.457630 | orchestrator | 14:50:09.457 STDOUT terraform:  } 2025-06-03 14:50:09.457678 | orchestrator | 14:50:09.457 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-06-03 14:50:09.457723 | orchestrator | 14:50:09.457 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-03 14:50:09.457766 | orchestrator | 14:50:09.457 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-03 14:50:09.457802 | orchestrator | 14:50:09.457 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-03 14:50:09.457841 | orchestrator | 14:50:09.457 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-03 14:50:09.457877 | orchestrator | 14:50:09.457 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.457902 | orchestrator | 14:50:09.457 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.457924 | orchestrator | 14:50:09.457 STDOUT terraform:  + config_drive = true 2025-06-03 
14:50:09.457962 | orchestrator | 14:50:09.457 STDOUT terraform:  + created = (known after apply) 2025-06-03 14:50:09.458000 | orchestrator | 14:50:09.457 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-03 14:50:09.458113 | orchestrator | 14:50:09.457 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-03 14:50:09.458132 | orchestrator | 14:50:09.458 STDOUT terraform:  + force_delete = false 2025-06-03 14:50:09.458170 | orchestrator | 14:50:09.458 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-03 14:50:09.458209 | orchestrator | 14:50:09.458 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.458275 | orchestrator | 14:50:09.458 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.458315 | orchestrator | 14:50:09.458 STDOUT terraform:  + image_name = (known after apply) 2025-06-03 14:50:09.458343 | orchestrator | 14:50:09.458 STDOUT terraform:  + key_pair = "testbed" 2025-06-03 14:50:09.458381 | orchestrator | 14:50:09.458 STDOUT terraform:  + name = "testbed-node-4" 2025-06-03 14:50:09.458409 | orchestrator | 14:50:09.458 STDOUT terraform:  + power_state = "active" 2025-06-03 14:50:09.458445 | orchestrator | 14:50:09.458 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.458481 | orchestrator | 14:50:09.458 STDOUT terraform:  + security_groups = (known after apply) 2025-06-03 14:50:09.458508 | orchestrator | 14:50:09.458 STDOUT terraform:  + stop_before_destroy = false 2025-06-03 14:50:09.458547 | orchestrator | 14:50:09.458 STDOUT terraform:  + updated = (known after apply) 2025-06-03 14:50:09.458600 | orchestrator | 14:50:09.458 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-03 14:50:09.458617 | orchestrator | 14:50:09.458 STDOUT terraform:  + block_device { 2025-06-03 14:50:09.458641 | orchestrator | 14:50:09.458 STDOUT terraform:  + boot_index = 0 2025-06-03 14:50:09.458668 | orchestrator | 14:50:09.458 STDOUT terraform:  + delete_on_termination = false 2025-06-03 14:50:09.458700 | orchestrator | 14:50:09.458 STDOUT terraform:  + destination_type = "volume" 2025-06-03 14:50:09.458733 | orchestrator | 14:50:09.458 STDOUT terraform:  + multiattach = false 2025-06-03 14:50:09.458755 | orchestrator | 14:50:09.458 STDOUT terraform:  + source_type = "volume" 2025-06-03 14:50:09.458794 | orchestrator | 14:50:09.458 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.458810 | orchestrator | 14:50:09.458 STDOUT terraform:  } 2025-06-03 14:50:09.458826 | orchestrator | 14:50:09.458 STDOUT terraform:  + network { 2025-06-03 14:50:09.458847 | orchestrator | 14:50:09.458 STDOUT terraform:  + access_network = false 2025-06-03 14:50:09.458881 | orchestrator | 14:50:09.458 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-03 14:50:09.458910 | orchestrator | 14:50:09.458 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-03 14:50:09.458941 | orchestrator | 14:50:09.458 STDOUT terraform:  + mac = (known after apply) 2025-06-03 14:50:09.458972 | orchestrator | 14:50:09.458 STDOUT terraform:  + name = (known after apply) 2025-06-03 14:50:09.459003 | orchestrator | 14:50:09.458 STDOUT terraform:  + port = (known after apply) 2025-06-03 14:50:09.459034 | orchestrator | 14:50:09.458 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.459044 | orchestrator | 14:50:09.459 STDOUT terraform:  } 2025-06-03 14:50:09.459052 | orchestrator | 14:50:09.459 STDOUT terraform:  } 2025-06-03 14:50:09.459099 | orchestrator | 14:50:09.459 
STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-06-03 14:50:09.459139 | orchestrator | 14:50:09.459 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-03 14:50:09.459175 | orchestrator | 14:50:09.459 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-03 14:50:09.459209 | orchestrator | 14:50:09.459 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-03 14:50:09.459253 | orchestrator | 14:50:09.459 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-03 14:50:09.459287 | orchestrator | 14:50:09.459 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.459310 | orchestrator | 14:50:09.459 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:50:09.459332 | orchestrator | 14:50:09.459 STDOUT terraform:  + config_drive = true 2025-06-03 14:50:09.459373 | orchestrator | 14:50:09.459 STDOUT terraform:  + created = (known after apply) 2025-06-03 14:50:09.459427 | orchestrator | 14:50:09.459 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-03 14:50:09.459458 | orchestrator | 14:50:09.459 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-03 14:50:09.459481 | orchestrator | 14:50:09.459 STDOUT terraform:  + force_delete = false 2025-06-03 14:50:09.459516 | orchestrator | 14:50:09.459 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-03 14:50:09.459551 | orchestrator | 14:50:09.459 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.459589 | orchestrator | 14:50:09.459 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:50:09.459621 | orchestrator | 14:50:09.459 STDOUT terraform:  + image_name = (known after apply) 2025-06-03 14:50:09.459646 | orchestrator | 14:50:09.459 STDOUT terraform:  + key_pair = "testbed" 2025-06-03 14:50:09.459678 | orchestrator | 14:50:09.459 STDOUT terraform:  + name = "testbed-node-5" 2025-06-03 14:50:09.459704 | orchestrator | 14:50:09.459 STDOUT terraform:  + power_state = "active" 2025-06-03 14:50:09.459740 | orchestrator | 14:50:09.459 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.459775 | orchestrator | 14:50:09.459 STDOUT terraform:  + security_groups = (known after apply) 2025-06-03 14:50:09.459798 | orchestrator | 14:50:09.459 STDOUT terraform:  + stop_before_destroy = false 2025-06-03 14:50:09.459834 | orchestrator | 14:50:09.459 STDOUT terraform:  + updated = (known after apply) 2025-06-03 14:50:09.459883 | orchestrator | 14:50:09.459 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-03 14:50:09.459900 | orchestrator | 14:50:09.459 STDOUT terraform:  + block_device { 2025-06-03 14:50:09.459924 | orchestrator | 14:50:09.459 STDOUT terraform:  + boot_index = 0 2025-06-03 14:50:09.459953 | orchestrator | 14:50:09.459 STDOUT terraform:  + delete_on_termination = false 2025-06-03 14:50:09.459983 | orchestrator | 14:50:09.459 STDOUT terraform:  + destination_type = "volume" 2025-06-03 14:50:09.460011 | orchestrator | 14:50:09.459 STDOUT terraform:  + multiattach = false 2025-06-03 14:50:09.460040 | orchestrator | 14:50:09.460 STDOUT terraform:  + source_type = "volume" 2025-06-03 14:50:09.460078 | orchestrator | 14:50:09.460 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.460084 | orchestrator | 14:50:09.460 STDOUT terraform:  } 2025-06-03 14:50:09.460103 | orchestrator | 14:50:09.460 STDOUT terraform:  + network { 2025-06-03 14:50:09.460124 | orchestrator | 14:50:09.460 STDOUT terraform:  + 
access_network = false 2025-06-03 14:50:09.460155 | orchestrator | 14:50:09.460 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-03 14:50:09.460185 | orchestrator | 14:50:09.460 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-03 14:50:09.460216 | orchestrator | 14:50:09.460 STDOUT terraform:  + mac = (known after apply) 2025-06-03 14:50:09.460383 | orchestrator | 14:50:09.460 STDOUT terraform:  + name = (known after apply) 2025-06-03 14:50:09.460473 | orchestrator | 14:50:09.460 STDOUT terraform:  + port = (known after apply) 2025-06-03 14:50:09.460488 | orchestrator | 14:50:09.460 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:50:09.460500 | orchestrator | 14:50:09.460 STDOUT terraform:  } 2025-06-03 14:50:09.460523 | orchestrator | 14:50:09.460 STDOUT terraform:  } 2025-06-03 14:50:09.460535 | orchestrator | 14:50:09.460 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-06-03 14:50:09.460546 | orchestrator | 14:50:09.460 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-06-03 14:50:09.460557 | orchestrator | 14:50:09.460 STDOUT terraform:  + fingerprint = (known after apply) 2025-06-03 14:50:09.460568 | orchestrator | 14:50:09.460 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.460578 | orchestrator | 14:50:09.460 STDOUT terraform:  + name = "testbed" 2025-06-03 14:50:09.460589 | orchestrator | 14:50:09.460 STDOUT terraform:  + private_key = (sensitive value) 2025-06-03 14:50:09.460600 | orchestrator | 14:50:09.460 STDOUT terraform:  + public_key = (known after apply) 2025-06-03 14:50:09.460615 | orchestrator | 14:50:09.460 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.460626 | orchestrator | 14:50:09.460 STDOUT terraform:  + user_id = (known after apply) 2025-06-03 14:50:09.460637 | orchestrator | 14:50:09.460 STDOUT terraform:  } 2025-06-03 14:50:09.460652 | orchestrator | 14:50:09.460 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-06-03 14:50:09.460668 | orchestrator | 14:50:09.460 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:50:09.460702 | orchestrator | 14:50:09.460 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:50:09.460719 | orchestrator | 14:50:09.460 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.460763 | orchestrator | 14:50:09.460 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:50:09.460781 | orchestrator | 14:50:09.460 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.460795 | orchestrator | 14:50:09.460 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:50:09.460831 | orchestrator | 14:50:09.460 STDOUT terraform:  } 2025-06-03 14:50:09.460858 | orchestrator | 14:50:09.460 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-06-03 14:50:09.460926 | orchestrator | 14:50:09.460 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:50:09.460945 | orchestrator | 14:50:09.460 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:50:09.460959 | orchestrator | 14:50:09.460 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.460973 | orchestrator | 14:50:09.460 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:50:09.461014 | orchestrator | 14:50:09.460 STDOUT terraform:  + region = (known after 
apply) 2025-06-03 14:50:09.461031 | orchestrator | 14:50:09.461 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:50:09.461045 | orchestrator | 14:50:09.461 STDOUT terraform:  } 2025-06-03 14:50:09.461163 | orchestrator | 14:50:09.461 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-06-03 14:50:09.461179 | orchestrator | 14:50:09.461 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:50:09.461204 | orchestrator | 14:50:09.461 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:50:09.461219 | orchestrator | 14:50:09.461 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.461230 | orchestrator | 14:50:09.461 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:50:09.461310 | orchestrator | 14:50:09.461 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.461322 | orchestrator | 14:50:09.461 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:50:09.461333 | orchestrator | 14:50:09.461 STDOUT terraform:  } 2025-06-03 14:50:09.461348 | orchestrator | 14:50:09.461 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-06-03 14:50:09.461390 | orchestrator | 14:50:09.461 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:50:09.461406 | orchestrator | 14:50:09.461 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:50:09.461421 | orchestrator | 14:50:09.461 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.461459 | orchestrator | 14:50:09.461 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:50:09.461475 | orchestrator | 14:50:09.461 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.461512 | orchestrator | 14:50:09.461 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:50:09.461524 | orchestrator | 14:50:09.461 STDOUT terraform:  } 2025-06-03 14:50:09.461562 | orchestrator | 14:50:09.461 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-06-03 14:50:09.461615 | orchestrator | 14:50:09.461 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:50:09.461632 | orchestrator | 14:50:09.461 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:50:09.461648 | orchestrator | 14:50:09.461 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.461673 | orchestrator | 14:50:09.461 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:50:09.461712 | orchestrator | 14:50:09.461 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.461740 | orchestrator | 14:50:09.461 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:50:09.461755 | orchestrator | 14:50:09.461 STDOUT terraform:  } 2025-06-03 14:50:09.461792 | orchestrator | 14:50:09.461 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-06-03 14:50:09.461840 | orchestrator | 14:50:09.461 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:50:09.461857 | orchestrator | 14:50:09.461 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:50:09.461905 | orchestrator | 14:50:09.461 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.461918 | orchestrator | 14:50:09.461 STDOUT terraform: 
 + instance_id = (known after apply) 2025-06-03 14:50:09.461932 | orchestrator | 14:50:09.461 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.461980 | orchestrator | 14:50:09.461 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:50:09.461993 | orchestrator | 14:50:09.461 STDOUT terraform:  } 2025-06-03 14:50:09.462043 | orchestrator | 14:50:09.461 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-06-03 14:50:09.462096 | orchestrator | 14:50:09.462 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:50:09.462113 | orchestrator | 14:50:09.462 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:50:09.462140 | orchestrator | 14:50:09.462 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.462189 | orchestrator | 14:50:09.462 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:50:09.462201 | orchestrator | 14:50:09.462 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.462216 | orchestrator | 14:50:09.462 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:50:09.462230 | orchestrator | 14:50:09.462 STDOUT terraform:  } 2025-06-03 14:50:09.462287 | orchestrator | 14:50:09.462 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-06-03 14:50:09.462339 | orchestrator | 14:50:09.462 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:50:09.462356 | orchestrator | 14:50:09.462 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:50:09.462405 | orchestrator | 14:50:09.462 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.462418 | orchestrator | 14:50:09.462 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:50:09.462432 | orchestrator | 14:50:09.462 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.462480 | orchestrator | 14:50:09.462 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:50:09.462493 | orchestrator | 14:50:09.462 STDOUT terraform:  } 2025-06-03 14:50:09.462520 | orchestrator | 14:50:09.462 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-06-03 14:50:09.462580 | orchestrator | 14:50:09.462 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:50:09.462597 | orchestrator | 14:50:09.462 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:50:09.462645 | orchestrator | 14:50:09.462 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.462658 | orchestrator | 14:50:09.462 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:50:09.462672 | orchestrator | 14:50:09.462 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.462709 | orchestrator | 14:50:09.462 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:50:09.462722 | orchestrator | 14:50:09.462 STDOUT terraform:  } 2025-06-03 14:50:09.462771 | orchestrator | 14:50:09.462 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-06-03 14:50:09.462828 | orchestrator | 14:50:09.462 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-06-03 14:50:09.462845 | orchestrator | 14:50:09.462 STDOUT terraform:  + fixed_ip = (known after apply) 
2025-06-03 14:50:09.462944 | orchestrator | 14:50:09.462 STDOUT terraform:  + floating_ip = (known after apply) 2025-06-03 14:50:09.463022 | orchestrator | 14:50:09.462 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.463051 | orchestrator | 14:50:09.463 STDOUT terraform:  + port_id = (known after apply) 2025-06-03 14:50:09.463089 | orchestrator | 14:50:09.463 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.463101 | orchestrator | 14:50:09.463 STDOUT terraform:  } 2025-06-03 14:50:09.463156 | orchestrator | 14:50:09.463 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-06-03 14:50:09.463210 | orchestrator | 14:50:09.463 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-06-03 14:50:09.463227 | orchestrator | 14:50:09.463 STDOUT terraform:  + address = (known after apply) 2025-06-03 14:50:09.463260 | orchestrator | 14:50:09.463 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.463275 | orchestrator | 14:50:09.463 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-03 14:50:09.463313 | orchestrator | 14:50:09.463 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:50:09.463328 | orchestrator | 14:50:09.463 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-03 14:50:09.463376 | orchestrator | 14:50:09.463 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.463389 | orchestrator | 14:50:09.463 STDOUT terraform:  + pool = "public" 2025-06-03 14:50:09.463404 | orchestrator | 14:50:09.463 STDOUT terraform:  + port_id = (known after apply) 2025-06-03 14:50:09.463418 | orchestrator | 14:50:09.463 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.463439 | orchestrator | 14:50:09.463 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:50:09.463453 | orchestrator | 14:50:09.463 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.463467 | orchestrator | 14:50:09.463 STDOUT terraform:  } 2025-06-03 14:50:09.463574 | orchestrator | 14:50:09.463 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-06-03 14:50:09.463631 | orchestrator | 14:50:09.463 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-06-03 14:50:09.463648 | orchestrator | 14:50:09.463 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:50:09.463685 | orchestrator | 14:50:09.463 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.463701 | orchestrator | 14:50:09.463 STDOUT terraform:  + availability_zone_hints = [ 2025-06-03 14:50:09.463715 | orchestrator | 14:50:09.463 STDOUT terraform:  + "nova", 2025-06-03 14:50:09.463727 | orchestrator | 14:50:09.463 STDOUT terraform:  ] 2025-06-03 14:50:09.463764 | orchestrator | 14:50:09.463 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-03 14:50:09.463813 | orchestrator | 14:50:09.463 STDOUT terraform:  + external = (known after apply) 2025-06-03 14:50:09.463830 | orchestrator | 14:50:09.463 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.463878 | orchestrator | 14:50:09.463 STDOUT terraform:  + mtu = (known after apply) 2025-06-03 14:50:09.463894 | orchestrator | 14:50:09.463 STDOUT terraform:  + name = "net-testbed-management" 2025-06-03 14:50:09.463947 | orchestrator | 14:50:09.463 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:50:09.463963 | orchestrator | 14:50:09.463 
STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:50:09.464018 | orchestrator | 14:50:09.463 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.464034 | orchestrator | 14:50:09.463 STDOUT terraform:  + shared = (known after apply) 2025-06-03 14:50:09.464086 | orchestrator | 14:50:09.464 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.464103 | orchestrator | 14:50:09.464 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-06-03 14:50:09.464145 | orchestrator | 14:50:09.464 STDOUT terraform:  + segments (known after apply) 2025-06-03 14:50:09.464158 | orchestrator | 14:50:09.464 STDOUT terraform:  } 2025-06-03 14:50:09.464206 | orchestrator | 14:50:09.464 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-06-03 14:50:09.464223 | orchestrator | 14:50:09.464 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-06-03 14:50:09.464307 | orchestrator | 14:50:09.464 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:50:09.464325 | orchestrator | 14:50:09.464 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:50:09.464401 | orchestrator | 14:50:09.464 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:50:09.464415 | orchestrator | 14:50:09.464 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.464429 | orchestrator | 14:50:09.464 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:50:09.464467 | orchestrator | 14:50:09.464 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:50:09.464516 | orchestrator | 14:50:09.464 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:50:09.464532 | orchestrator | 14:50:09.464 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:50:09.464588 | orchestrator | 14:50:09.464 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.464605 | orchestrator | 14:50:09.464 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:50:09.464642 | orchestrator | 14:50:09.464 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:50:09.464691 | orchestrator | 14:50:09.464 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:50:09.464707 | orchestrator | 14:50:09.464 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:50:09.464744 | orchestrator | 14:50:09.464 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.464760 | orchestrator | 14:50:09.464 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:50:09.464815 | orchestrator | 14:50:09.464 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.464831 | orchestrator | 14:50:09.464 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.464846 | orchestrator | 14:50:09.464 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:50:09.464860 | orchestrator | 14:50:09.464 STDOUT terraform:  } 2025-06-03 14:50:09.464874 | orchestrator | 14:50:09.464 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.464897 | orchestrator | 14:50:09.464 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:50:09.464913 | orchestrator | 14:50:09.464 STDOUT terraform:  } 2025-06-03 14:50:09.464927 | orchestrator | 14:50:09.464 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:50:09.464941 | orchestrator | 14:50:09.464 STDOUT terraform:  + fixed_ip 
{ 2025-06-03 14:50:09.464956 | orchestrator | 14:50:09.464 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-06-03 14:50:09.465003 | orchestrator | 14:50:09.464 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:50:09.465016 | orchestrator | 14:50:09.464 STDOUT terraform:  } 2025-06-03 14:50:09.465031 | orchestrator | 14:50:09.464 STDOUT terraform:  } 2025-06-03 14:50:09.465080 | orchestrator | 14:50:09.465 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-06-03 14:50:09.465097 | orchestrator | 14:50:09.465 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-03 14:50:09.465148 | orchestrator | 14:50:09.465 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:50:09.465165 | orchestrator | 14:50:09.465 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:50:09.465216 | orchestrator | 14:50:09.465 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:50:09.465232 | orchestrator | 14:50:09.465 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.465296 | orchestrator | 14:50:09.465 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:50:09.465312 | orchestrator | 14:50:09.465 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:50:09.465364 | orchestrator | 14:50:09.465 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:50:09.465389 | orchestrator | 14:50:09.465 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:50:09.465435 | orchestrator | 14:50:09.465 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.465452 | orchestrator | 14:50:09.465 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:50:09.465505 | orchestrator | 14:50:09.465 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:50:09.465521 | orchestrator | 14:50:09.465 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:50:09.465572 | orchestrator | 14:50:09.465 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:50:09.465588 | orchestrator | 14:50:09.465 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.465641 | orchestrator | 14:50:09.465 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:50:09.465691 | orchestrator | 14:50:09.465 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.465707 | orchestrator | 14:50:09.465 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.465781 | orchestrator | 14:50:09.465 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:50:09.465799 | orchestrator | 14:50:09.465 STDOUT terraform:  } 2025-06-03 14:50:09.465873 | orchestrator | 14:50:09.465 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.465957 | orchestrator | 14:50:09.465 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:50:09.465998 | orchestrator | 14:50:09.465 STDOUT terraform:  } 2025-06-03 14:50:09.466100 | orchestrator | 14:50:09.465 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.466185 | orchestrator | 14:50:09.466 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:50:09.466226 | orchestrator | 14:50:09.466 STDOUT terraform:  } 2025-06-03 14:50:09.466318 | orchestrator | 14:50:09.466 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.466405 | orchestrator | 14:50:09.466 STDOUT terraform:  + ip_address = "192.168.16.9/20" 
2025-06-03 14:50:09.466446 | orchestrator | 14:50:09.466 STDOUT terraform:  } 2025-06-03 14:50:09.466521 | orchestrator | 14:50:09.466 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:50:09.466561 | orchestrator | 14:50:09.466 STDOUT terraform:  + fixed_ip { 2025-06-03 14:50:09.466634 | orchestrator | 14:50:09.466 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-06-03 14:50:09.466697 | orchestrator | 14:50:09.466 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:50:09.466714 | orchestrator | 14:50:09.466 STDOUT terraform:  } 2025-06-03 14:50:09.466726 | orchestrator | 14:50:09.466 STDOUT terraform:  } 2025-06-03 14:50:09.466797 | orchestrator | 14:50:09.466 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-06-03 14:50:09.466849 | orchestrator | 14:50:09.466 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-03 14:50:09.466866 | orchestrator | 14:50:09.466 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:50:09.466919 | orchestrator | 14:50:09.466 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:50:09.466947 | orchestrator | 14:50:09.466 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:50:09.466992 | orchestrator | 14:50:09.466 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.467008 | orchestrator | 14:50:09.466 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:50:09.467071 | orchestrator | 14:50:09.467 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:50:09.467093 | orchestrator | 14:50:09.467 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:50:09.467141 | orchestrator | 14:50:09.467 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:50:09.467158 | orchestrator | 14:50:09.467 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.467211 | orchestrator | 14:50:09.467 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:50:09.467283 | orchestrator | 14:50:09.467 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:50:09.467297 | orchestrator | 14:50:09.467 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:50:09.467344 | orchestrator | 14:50:09.467 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:50:09.467360 | orchestrator | 14:50:09.467 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.467397 | orchestrator | 14:50:09.467 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:50:09.467436 | orchestrator | 14:50:09.467 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.467452 | orchestrator | 14:50:09.467 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.467467 | orchestrator | 14:50:09.467 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:50:09.467481 | orchestrator | 14:50:09.467 STDOUT terraform:  } 2025-06-03 14:50:09.467500 | orchestrator | 14:50:09.467 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.467523 | orchestrator | 14:50:09.467 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:50:09.467538 | orchestrator | 14:50:09.467 STDOUT terraform:  } 2025-06-03 14:50:09.467552 | orchestrator | 14:50:09.467 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.467566 | orchestrator | 14:50:09.467 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 
14:50:09.467580 | orchestrator | 14:50:09.467 STDOUT terraform:  } 2025-06-03 14:50:09.467595 | orchestrator | 14:50:09.467 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.467633 | orchestrator | 14:50:09.467 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-03 14:50:09.467647 | orchestrator | 14:50:09.467 STDOUT terraform:  } 2025-06-03 14:50:09.467661 | orchestrator | 14:50:09.467 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:50:09.467672 | orchestrator | 14:50:09.467 STDOUT terraform:  + fixed_ip { 2025-06-03 14:50:09.467686 | orchestrator | 14:50:09.467 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-03 14:50:09.467724 | orchestrator | 14:50:09.467 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:50:09.467736 | orchestrator | 14:50:09.467 STDOUT terraform:  } 2025-06-03 14:50:09.467759 | orchestrator | 14:50:09.467 STDOUT terraform:  } 2025-06-03 14:50:09.467798 | orchestrator | 14:50:09.467 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-03 14:50:09.467848 | orchestrator | 14:50:09.467 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-03 14:50:09.467899 | orchestrator | 14:50:09.467 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:50:09.467915 | orchestrator | 14:50:09.467 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:50:09.467953 | orchestrator | 14:50:09.467 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:50:09.467977 | orchestrator | 14:50:09.467 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.468026 | orchestrator | 14:50:09.467 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:50:09.468043 | orchestrator | 14:50:09.468 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:50:09.468182 | orchestrator | 14:50:09.468 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:50:09.468223 | orchestrator | 14:50:09.468 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:50:09.468243 | orchestrator | 14:50:09.468 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.468254 | orchestrator | 14:50:09.468 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:50:09.468259 | orchestrator | 14:50:09.468 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:50:09.468264 | orchestrator | 14:50:09.468 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:50:09.468307 | orchestrator | 14:50:09.468 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:50:09.468344 | orchestrator | 14:50:09.468 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.468379 | orchestrator | 14:50:09.468 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:50:09.468415 | orchestrator | 14:50:09.468 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.468439 | orchestrator | 14:50:09.468 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.468468 | orchestrator | 14:50:09.468 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:50:09.468476 | orchestrator | 14:50:09.468 STDOUT terraform:  } 2025-06-03 14:50:09.468501 | orchestrator | 14:50:09.468 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.468534 | orchestrator | 14:50:09.468 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:50:09.468541 | 
orchestrator | 14:50:09.468 STDOUT terraform:  } 2025-06-03 14:50:09.468568 | orchestrator | 14:50:09.468 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.468596 | orchestrator | 14:50:09.468 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:50:09.468603 | orchestrator | 14:50:09.468 STDOUT terraform:  } 2025-06-03 14:50:09.468628 | orchestrator | 14:50:09.468 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.468659 | orchestrator | 14:50:09.468 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-03 14:50:09.468673 | orchestrator | 14:50:09.468 STDOUT terraform:  } 2025-06-03 14:50:09.468695 | orchestrator | 14:50:09.468 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:50:09.468701 | orchestrator | 14:50:09.468 STDOUT terraform:  + fixed_ip { 2025-06-03 14:50:09.468737 | orchestrator | 14:50:09.468 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-03 14:50:09.468768 | orchestrator | 14:50:09.468 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:50:09.468775 | orchestrator | 14:50:09.468 STDOUT terraform:  } 2025-06-03 14:50:09.468790 | orchestrator | 14:50:09.468 STDOUT terraform:  } 2025-06-03 14:50:09.468834 | orchestrator | 14:50:09.468 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-03 14:50:09.468879 | orchestrator | 14:50:09.468 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-03 14:50:09.468915 | orchestrator | 14:50:09.468 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:50:09.468952 | orchestrator | 14:50:09.468 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:50:09.468987 | orchestrator | 14:50:09.468 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:50:09.469023 | orchestrator | 14:50:09.468 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.469059 | orchestrator | 14:50:09.469 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:50:09.469095 | orchestrator | 14:50:09.469 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:50:09.469132 | orchestrator | 14:50:09.469 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:50:09.469168 | orchestrator | 14:50:09.469 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:50:09.469205 | orchestrator | 14:50:09.469 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.469281 | orchestrator | 14:50:09.469 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:50:09.469317 | orchestrator | 14:50:09.469 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:50:09.469354 | orchestrator | 14:50:09.469 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:50:09.469390 | orchestrator | 14:50:09.469 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:50:09.469427 | orchestrator | 14:50:09.469 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.469465 | orchestrator | 14:50:09.469 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:50:09.469502 | orchestrator | 14:50:09.469 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.469518 | orchestrator | 14:50:09.469 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.469548 | orchestrator | 14:50:09.469 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:50:09.469555 | orchestrator | 
14:50:09.469 STDOUT terraform:  } 2025-06-03 14:50:09.469578 | orchestrator | 14:50:09.469 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.469608 | orchestrator | 14:50:09.469 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:50:09.469619 | orchestrator | 14:50:09.469 STDOUT terraform:  } 2025-06-03 14:50:09.469634 | orchestrator | 14:50:09.469 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.469664 | orchestrator | 14:50:09.469 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:50:09.469670 | orchestrator | 14:50:09.469 STDOUT terraform:  } 2025-06-03 14:50:09.469695 | orchestrator | 14:50:09.469 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.469724 | orchestrator | 14:50:09.469 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-03 14:50:09.469730 | orchestrator | 14:50:09.469 STDOUT terraform:  } 2025-06-03 14:50:09.469758 | orchestrator | 14:50:09.469 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:50:09.469764 | orchestrator | 14:50:09.469 STDOUT terraform:  + fixed_ip { 2025-06-03 14:50:09.469795 | orchestrator | 14:50:09.469 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-03 14:50:09.469824 | orchestrator | 14:50:09.469 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:50:09.469831 | orchestrator | 14:50:09.469 STDOUT terraform:  } 2025-06-03 14:50:09.469852 | orchestrator | 14:50:09.469 STDOUT terraform:  } 2025-06-03 14:50:09.469895 | orchestrator | 14:50:09.469 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-03 14:50:09.469940 | orchestrator | 14:50:09.469 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-03 14:50:09.469977 | orchestrator | 14:50:09.469 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:50:09.470032 | orchestrator | 14:50:09.469 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:50:09.470062 | orchestrator | 14:50:09.470 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:50:09.470104 | orchestrator | 14:50:09.470 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.470140 | orchestrator | 14:50:09.470 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:50:09.470176 | orchestrator | 14:50:09.470 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:50:09.470211 | orchestrator | 14:50:09.470 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:50:09.470350 | orchestrator | 14:50:09.470 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:50:09.470394 | orchestrator | 14:50:09.470 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.470413 | orchestrator | 14:50:09.470 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:50:09.470426 | orchestrator | 14:50:09.470 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:50:09.470437 | orchestrator | 14:50:09.470 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:50:09.470450 | orchestrator | 14:50:09.470 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:50:09.470463 | orchestrator | 14:50:09.470 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.470514 | orchestrator | 14:50:09.470 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:50:09.470540 | orchestrator | 14:50:09.470 STDOUT terraform:  + tenant_id = (known 
after apply) 2025-06-03 14:50:09.470551 | orchestrator | 14:50:09.470 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.470570 | orchestrator | 14:50:09.470 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:50:09.470592 | orchestrator | 14:50:09.470 STDOUT terraform:  } 2025-06-03 14:50:09.470608 | orchestrator | 14:50:09.470 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.470629 | orchestrator | 14:50:09.470 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:50:09.470645 | orchestrator | 14:50:09.470 STDOUT terraform:  } 2025-06-03 14:50:09.470658 | orchestrator | 14:50:09.470 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.470671 | orchestrator | 14:50:09.470 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:50:09.470684 | orchestrator | 14:50:09.470 STDOUT terraform:  } 2025-06-03 14:50:09.470696 | orchestrator | 14:50:09.470 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.470720 | orchestrator | 14:50:09.470 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-03 14:50:09.470734 | orchestrator | 14:50:09.470 STDOUT terraform:  } 2025-06-03 14:50:09.470747 | orchestrator | 14:50:09.470 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:50:09.470760 | orchestrator | 14:50:09.470 STDOUT terraform:  + fixed_ip { 2025-06-03 14:50:09.470784 | orchestrator | 14:50:09.470 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-03 14:50:09.470818 | orchestrator | 14:50:09.470 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:50:09.470833 | orchestrator | 14:50:09.470 STDOUT terraform:  } 2025-06-03 14:50:09.470843 | orchestrator | 14:50:09.470 STDOUT terraform:  } 2025-06-03 14:50:09.470891 | orchestrator | 14:50:09.470 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-03 14:50:09.470935 | orchestrator | 14:50:09.470 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-03 14:50:09.470973 | orchestrator | 14:50:09.470 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:50:09.471010 | orchestrator | 14:50:09.470 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:50:09.471049 | orchestrator | 14:50:09.470 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:50:09.471085 | orchestrator | 14:50:09.471 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.471123 | orchestrator | 14:50:09.471 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:50:09.471159 | orchestrator | 14:50:09.471 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:50:09.471196 | orchestrator | 14:50:09.471 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:50:09.471304 | orchestrator | 14:50:09.471 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:50:09.471325 | orchestrator | 14:50:09.471 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.471348 | orchestrator | 14:50:09.471 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:50:09.471371 | orchestrator | 14:50:09.471 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:50:09.471385 | orchestrator | 14:50:09.471 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:50:09.471400 | orchestrator | 14:50:09.471 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:50:09.471457 | orchestrator | 
14:50:09.471 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.471474 | orchestrator | 14:50:09.471 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:50:09.471524 | orchestrator | 14:50:09.471 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.471547 | orchestrator | 14:50:09.471 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.471562 | orchestrator | 14:50:09.471 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:50:09.471574 | orchestrator | 14:50:09.471 STDOUT terraform:  } 2025-06-03 14:50:09.471588 | orchestrator | 14:50:09.471 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.471602 | orchestrator | 14:50:09.471 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:50:09.471616 | orchestrator | 14:50:09.471 STDOUT terraform:  } 2025-06-03 14:50:09.471631 | orchestrator | 14:50:09.471 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.471669 | orchestrator | 14:50:09.471 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:50:09.471682 | orchestrator | 14:50:09.471 STDOUT terraform:  } 2025-06-03 14:50:09.471696 | orchestrator | 14:50:09.471 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:50:09.471710 | orchestrator | 14:50:09.471 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-03 14:50:09.471725 | orchestrator | 14:50:09.471 STDOUT terraform:  } 2025-06-03 14:50:09.471739 | orchestrator | 14:50:09.471 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:50:09.471754 | orchestrator | 14:50:09.471 STDOUT terraform:  + fixed_ip { 2025-06-03 14:50:09.471768 | orchestrator | 14:50:09.471 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-03 14:50:09.471805 | orchestrator | 14:50:09.471 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:50:09.471818 | orchestrator | 14:50:09.471 STDOUT terraform:  } 2025-06-03 14:50:09.471832 | orchestrator | 14:50:09.471 STDOUT terraform:  } 2025-06-03 14:50:09.471870 | orchestrator | 14:50:09.471 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-03 14:50:09.471922 | orchestrator | 14:50:09.471 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-03 14:50:09.471939 | orchestrator | 14:50:09.471 STDOUT terraform:  + force_destroy = false 2025-06-03 14:50:09.471953 | orchestrator | 14:50:09.471 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.471991 | orchestrator | 14:50:09.471 STDOUT terraform:  + port_id = (known after apply) 2025-06-03 14:50:09.472007 | orchestrator | 14:50:09.471 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.472031 | orchestrator | 14:50:09.471 STDOUT terraform:  + router_id = (known after apply) 2025-06-03 14:50:09.472053 | orchestrator | 14:50:09.472 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:50:09.472067 | orchestrator | 14:50:09.472 STDOUT terraform:  } 2025-06-03 14:50:09.472106 | orchestrator | 14:50:09.472 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-03 14:50:09.472156 | orchestrator | 14:50:09.472 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-03 14:50:09.472196 | orchestrator | 14:50:09.472 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:50:09.472267 | orchestrator | 14:50:09.472 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.472282 | 
orchestrator | 14:50:09.472 STDOUT terraform:  + availability_zone_hints = [ 2025-06-03 14:50:09.472296 | orchestrator | 14:50:09.472 STDOUT terraform:  + "nova", 2025-06-03 14:50:09.472308 | orchestrator | 14:50:09.472 STDOUT terraform:  ] 2025-06-03 14:50:09.472327 | orchestrator | 14:50:09.472 STDOUT terraform:  + distributed = (known after apply) 2025-06-03 14:50:09.472365 | orchestrator | 14:50:09.472 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-03 14:50:09.472418 | orchestrator | 14:50:09.472 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-03 14:50:09.472436 | orchestrator | 14:50:09.472 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.472483 | orchestrator | 14:50:09.472 STDOUT terraform:  + name = "testbed" 2025-06-03 14:50:09.472500 | orchestrator | 14:50:09.472 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.472553 | orchestrator | 14:50:09.472 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.472569 | orchestrator | 14:50:09.472 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-03 14:50:09.472583 | orchestrator | 14:50:09.472 STDOUT terraform:  } 2025-06-03 14:50:09.472645 | orchestrator | 14:50:09.472 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-03 14:50:09.472699 | orchestrator | 14:50:09.472 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-03 14:50:09.472715 | orchestrator | 14:50:09.472 STDOUT terraform:  + description = "ssh" 2025-06-03 14:50:09.472730 | orchestrator | 14:50:09.472 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:50:09.472744 | orchestrator | 14:50:09.472 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:50:09.472782 | orchestrator | 14:50:09.472 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.472797 | orchestrator | 14:50:09.472 STDOUT terraform:  + port_range_max = 22 2025-06-03 14:50:09.472812 | orchestrator | 14:50:09.472 STDOUT terraform:  + port_range_min = 22 2025-06-03 14:50:09.472826 | orchestrator | 14:50:09.472 STDOUT terraform:  + protocol = "tcp" 2025-06-03 14:50:09.472863 | orchestrator | 14:50:09.472 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.472879 | orchestrator | 14:50:09.472 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:50:09.472916 | orchestrator | 14:50:09.472 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:50:09.472963 | orchestrator | 14:50:09.472 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:50:09.472979 | orchestrator | 14:50:09.472 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.472994 | orchestrator | 14:50:09.472 STDOUT terraform:  } 2025-06-03 14:50:09.473056 | orchestrator | 14:50:09.472 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-03 14:50:09.473110 | orchestrator | 14:50:09.473 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-03 14:50:09.473126 | orchestrator | 14:50:09.473 STDOUT terraform:  + description = "wireguard" 2025-06-03 14:50:09.473141 | orchestrator | 14:50:09.473 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:50:09.473155 | orchestrator | 14:50:09.473 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:50:09.473192 | orchestrator | 14:50:09.473 
STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.473207 | orchestrator | 14:50:09.473 STDOUT terraform:  + port_range_max = 51820 2025-06-03 14:50:09.473222 | orchestrator | 14:50:09.473 STDOUT terraform:  + port_range_min = 51820 2025-06-03 14:50:09.473258 | orchestrator | 14:50:09.473 STDOUT terraform:  + protocol = "udp" 2025-06-03 14:50:09.473274 | orchestrator | 14:50:09.473 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.473312 | orchestrator | 14:50:09.473 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:50:09.473328 | orchestrator | 14:50:09.473 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:50:09.473377 | orchestrator | 14:50:09.473 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:50:09.473389 | orchestrator | 14:50:09.473 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.473422 | orchestrator | 14:50:09.473 STDOUT terraform:  } 2025-06-03 14:50:09.473437 | orchestrator | 14:50:09.473 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-03 14:50:09.473508 | orchestrator | 14:50:09.473 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-03 14:50:09.473525 | orchestrator | 14:50:09.473 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:50:09.473539 | orchestrator | 14:50:09.473 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:50:09.473593 | orchestrator | 14:50:09.473 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.473606 | orchestrator | 14:50:09.473 STDOUT terraform:  + protocol = "tcp" 2025-06-03 14:50:09.473620 | orchestrator | 14:50:09.473 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.473634 | orchestrator | 14:50:09.473 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:50:09.473671 | orchestrator | 14:50:09.473 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-03 14:50:09.473687 | orchestrator | 14:50:09.473 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:50:09.473725 | orchestrator | 14:50:09.473 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.473745 | orchestrator | 14:50:09.473 STDOUT terraform:  } 2025-06-03 14:50:09.473783 | orchestrator | 14:50:09.473 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-03 14:50:09.473838 | orchestrator | 14:50:09.473 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-03 14:50:09.473854 | orchestrator | 14:50:09.473 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:50:09.473869 | orchestrator | 14:50:09.473 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:50:09.473917 | orchestrator | 14:50:09.473 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.473930 | orchestrator | 14:50:09.473 STDOUT terraform:  + protocol = "udp" 2025-06-03 14:50:09.473944 | orchestrator | 14:50:09.473 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.473992 | orchestrator | 14:50:09.473 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:50:09.474004 | orchestrator | 14:50:09.473 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-03 14:50:09.474049 | orchestrator | 14:50:09.473 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:50:09.474066 | 
orchestrator | 14:50:09.474 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.474078 | orchestrator | 14:50:09.474 STDOUT terraform:  } 2025-06-03 14:50:09.474136 | orchestrator | 14:50:09.474 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-03 14:50:09.474189 | orchestrator | 14:50:09.474 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-03 14:50:09.474205 | orchestrator | 14:50:09.474 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:50:09.474219 | orchestrator | 14:50:09.474 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:50:09.474268 | orchestrator | 14:50:09.474 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.474286 | orchestrator | 14:50:09.474 STDOUT terraform:  + protocol = "icmp" 2025-06-03 14:50:09.474300 | orchestrator | 14:50:09.474 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.474339 | orchestrator | 14:50:09.474 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:50:09.474355 | orchestrator | 14:50:09.474 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:50:09.474369 | orchestrator | 14:50:09.474 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:50:09.474417 | orchestrator | 14:50:09.474 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.474431 | orchestrator | 14:50:09.474 STDOUT terraform:  } 2025-06-03 14:50:09.474478 | orchestrator | 14:50:09.474 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-03 14:50:09.474530 | orchestrator | 14:50:09.474 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-03 14:50:09.474547 | orchestrator | 14:50:09.474 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:50:09.474561 | orchestrator | 14:50:09.474 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:50:09.474640 | orchestrator | 14:50:09.474 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.474660 | orchestrator | 14:50:09.474 STDOUT terraform:  + protocol = "tcp" 2025-06-03 14:50:09.474672 | orchestrator | 14:50:09.474 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.474687 | orchestrator | 14:50:09.474 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:50:09.474698 | orchestrator | 14:50:09.474 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:50:09.474712 | orchestrator | 14:50:09.474 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:50:09.474750 | orchestrator | 14:50:09.474 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.474763 | orchestrator | 14:50:09.474 STDOUT terraform:  } 2025-06-03 14:50:09.474811 | orchestrator | 14:50:09.474 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-03 14:50:09.474860 | orchestrator | 14:50:09.474 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-03 14:50:09.474877 | orchestrator | 14:50:09.474 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:50:09.474891 | orchestrator | 14:50:09.474 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:50:09.474915 | orchestrator | 14:50:09.474 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.474931 | orchestrator | 14:50:09.474 STDOUT 
terraform:  + protocol = "udp" 2025-06-03 14:50:09.474968 | orchestrator | 14:50:09.474 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.474983 | orchestrator | 14:50:09.474 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:50:09.475020 | orchestrator | 14:50:09.474 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:50:09.475036 | orchestrator | 14:50:09.474 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:50:09.475073 | orchestrator | 14:50:09.475 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.475086 | orchestrator | 14:50:09.475 STDOUT terraform:  } 2025-06-03 14:50:09.475133 | orchestrator | 14:50:09.475 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-03 14:50:09.475183 | orchestrator | 14:50:09.475 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-03 14:50:09.475201 | orchestrator | 14:50:09.475 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:50:09.475216 | orchestrator | 14:50:09.475 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:50:09.475289 | orchestrator | 14:50:09.475 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.475312 | orchestrator | 14:50:09.475 STDOUT terraform:  + protocol = "icmp" 2025-06-03 14:50:09.475329 | orchestrator | 14:50:09.475 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.475343 | orchestrator | 14:50:09.475 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:50:09.475393 | orchestrator | 14:50:09.475 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:50:09.475406 | orchestrator | 14:50:09.475 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:50:09.475429 | orchestrator | 14:50:09.475 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.475441 | orchestrator | 14:50:09.475 STDOUT terraform:  } 2025-06-03 14:50:09.475567 | orchestrator | 14:50:09.475 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-06-03 14:50:09.475598 | orchestrator | 14:50:09.475 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-06-03 14:50:09.475608 | orchestrator | 14:50:09.475 STDOUT terraform:  + description = "vrrp" 2025-06-03 14:50:09.475613 | orchestrator | 14:50:09.475 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:50:09.475617 | orchestrator | 14:50:09.475 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:50:09.475623 | orchestrator | 14:50:09.475 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.475652 | orchestrator | 14:50:09.475 STDOUT terraform:  + protocol = "112" 2025-06-03 14:50:09.475669 | orchestrator | 14:50:09.475 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.475771 | orchestrator | 14:50:09.475 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:50:09.475806 | orchestrator | 14:50:09.475 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:50:09.475816 | orchestrator | 14:50:09.475 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:50:09.475831 | orchestrator | 14:50:09.475 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.475842 | orchestrator | 14:50:09.475 STDOUT terraform:  } 2025-06-03 14:50:09.475852 | orchestrator | 14:50:09.475 STDOUT terraform:  # 
openstack_networking_secgroup_v2.security_group_management will be created 2025-06-03 14:50:09.475866 | orchestrator | 14:50:09.475 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-06-03 14:50:09.475903 | orchestrator | 14:50:09.475 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.475939 | orchestrator | 14:50:09.475 STDOUT terraform:  + description = "management security group" 2025-06-03 14:50:09.475953 | orchestrator | 14:50:09.475 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.475986 | orchestrator | 14:50:09.475 STDOUT terraform:  + name = "testbed-management" 2025-06-03 14:50:09.476000 | orchestrator | 14:50:09.475 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.476033 | orchestrator | 14:50:09.475 STDOUT terraform:  + stateful = (known after apply) 2025-06-03 14:50:09.476048 | orchestrator | 14:50:09.476 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.476061 | orchestrator | 14:50:09.476 STDOUT terraform:  } 2025-06-03 14:50:09.476115 | orchestrator | 14:50:09.476 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-06-03 14:50:09.476162 | orchestrator | 14:50:09.476 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-06-03 14:50:09.476177 | orchestrator | 14:50:09.476 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.476211 | orchestrator | 14:50:09.476 STDOUT terraform:  + description = "node security group" 2025-06-03 14:50:09.476263 | orchestrator | 14:50:09.476 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.476280 | orchestrator | 14:50:09.476 STDOUT terraform:  + name = "testbed-node" 2025-06-03 14:50:09.476290 | orchestrator | 14:50:09.476 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.476302 | orchestrator | 14:50:09.476 STDOUT terraform:  + stateful = (known after apply) 2025-06-03 14:50:09.476338 | orchestrator | 14:50:09.476 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.476349 | orchestrator | 14:50:09.476 STDOUT terraform:  } 2025-06-03 14:50:09.476394 | orchestrator | 14:50:09.476 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-06-03 14:50:09.476440 | orchestrator | 14:50:09.476 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-06-03 14:50:09.476455 | orchestrator | 14:50:09.476 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:50:09.476499 | orchestrator | 14:50:09.476 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-06-03 14:50:09.476514 | orchestrator | 14:50:09.476 STDOUT terraform:  + dns_nameservers = [ 2025-06-03 14:50:09.476524 | orchestrator | 14:50:09.476 STDOUT terraform:  + "8.8.8.8", 2025-06-03 14:50:09.476537 | orchestrator | 14:50:09.476 STDOUT terraform:  + "9.9.9.9", 2025-06-03 14:50:09.476547 | orchestrator | 14:50:09.476 STDOUT terraform:  ] 2025-06-03 14:50:09.476560 | orchestrator | 14:50:09.476 STDOUT terraform:  + enable_dhcp = true 2025-06-03 14:50:09.476594 | orchestrator | 14:50:09.476 STDOUT terraform:  + gateway_ip = (known after apply) 2025-06-03 14:50:09.476609 | orchestrator | 14:50:09.476 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.476622 | orchestrator | 14:50:09.476 STDOUT terraform:  + ip_version = 4 2025-06-03 14:50:09.476666 | orchestrator | 14:50:09.476 STDOUT terraform:  + ipv6_address_mode = (known 
after apply) 2025-06-03 14:50:09.476681 | orchestrator | 14:50:09.476 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-06-03 14:50:09.476730 | orchestrator | 14:50:09.476 STDOUT terraform:  + name = "subnet-testbed-management" 2025-06-03 14:50:09.476744 | orchestrator | 14:50:09.476 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:50:09.476787 | orchestrator | 14:50:09.476 STDOUT terraform:  + no_gateway = false 2025-06-03 14:50:09.476802 | orchestrator | 14:50:09.476 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:50:09.476815 | orchestrator | 14:50:09.476 STDOUT terraform:  + service_types = (known after apply) 2025-06-03 14:50:09.476860 | orchestrator | 14:50:09.476 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:50:09.476875 | orchestrator | 14:50:09.476 STDOUT terraform:  + allocation_pool { 2025-06-03 14:50:09.476888 | orchestrator | 14:50:09.476 STDOUT terraform:  + end = "192.168.31.250" 2025-06-03 14:50:09.476900 | orchestrator | 14:50:09.476 STDOUT terraform:  + start = "192.168.31.200" 2025-06-03 14:50:09.476913 | orchestrator | 14:50:09.476 STDOUT terraform:  } 2025-06-03 14:50:09.476926 | orchestrator | 14:50:09.476 STDOUT terraform:  } 2025-06-03 14:50:09.476945 | orchestrator | 14:50:09.476 STDOUT terraform:  # terraform_data.image will be created 2025-06-03 14:50:09.477027 | orchestrator | 14:50:09.476 STDOUT terraform:  + resource "terraform_data" "image" { 2025-06-03 14:50:09.477041 | orchestrator | 14:50:09.476 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.477054 | orchestrator | 14:50:09.476 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-03 14:50:09.477064 | orchestrator | 14:50:09.477 STDOUT terraform:  + output = (known after apply) 2025-06-03 14:50:09.477074 | orchestrator | 14:50:09.477 STDOUT terraform:  } 2025-06-03 14:50:09.477087 | orchestrator | 14:50:09.477 STDOUT terraform:  # terraform_data.image_node will be created 2025-06-03 14:50:09.477097 | orchestrator | 14:50:09.477 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-06-03 14:50:09.477110 | orchestrator | 14:50:09.477 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:50:09.477122 | orchestrator | 14:50:09.477 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-03 14:50:09.477135 | orchestrator | 14:50:09.477 STDOUT terraform:  + output = (known after apply) 2025-06-03 14:50:09.477147 | orchestrator | 14:50:09.477 STDOUT terraform:  } 2025-06-03 14:50:09.477182 | orchestrator | 14:50:09.477 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-06-03 14:50:09.477193 | orchestrator | 14:50:09.477 STDOUT terraform: Changes to Outputs: 2025-06-03 14:50:09.477207 | orchestrator | 14:50:09.477 STDOUT terraform:  + manager_address = (sensitive value) 2025-06-03 14:50:09.477268 | orchestrator | 14:50:09.477 STDOUT terraform:  + private_key = (sensitive value) 2025-06-03 14:50:09.694413 | orchestrator | 14:50:09.694 STDOUT terraform: terraform_data.image_node: Creating... 2025-06-03 14:50:09.695991 | orchestrator | 14:50:09.695 STDOUT terraform: terraform_data.image: Creating... 
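
Editor's note: the subnet portion of the plan above already contains every concrete value for the management network. For orientation, a minimal sketch of a resource definition that would produce this plan; the literal values (name, CIDR, DNS servers, allocation pool) are taken from the plan output, while the network_id reference is an assumption about how the testbed Terraform wires the resources together:

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id  # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
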
2025-06-03 14:50:09.696663 | orchestrator | 14:50:09.696 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=b752f0bc-6113-dda3-6c1a-ab0208c72fcd] 2025-06-03 14:50:09.697260 | orchestrator | 14:50:09.697 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=1ee0d86a-7d26-aa3d-76da-ebb0bfb72ce1] 2025-06-03 14:50:09.716168 | orchestrator | 14:50:09.715 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-06-03 14:50:09.716277 | orchestrator | 14:50:09.715 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-06-03 14:50:09.739582 | orchestrator | 14:50:09.739 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-06-03 14:50:09.740199 | orchestrator | 14:50:09.740 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-06-03 14:50:09.740368 | orchestrator | 14:50:09.740 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-06-03 14:50:09.740993 | orchestrator | 14:50:09.740 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-06-03 14:50:09.741279 | orchestrator | 14:50:09.741 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-06-03 14:50:09.741571 | orchestrator | 14:50:09.741 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-06-03 14:50:09.741796 | orchestrator | 14:50:09.741 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-06-03 14:50:09.752847 | orchestrator | 14:50:09.752 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-06-03 14:50:10.279639 | orchestrator | 14:50:10.279 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-03 14:50:10.405163 | orchestrator | 14:50:10.287 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-06-03 14:50:10.405232 | orchestrator | 14:50:10.294 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-03 14:50:10.405274 | orchestrator | 14:50:10.301 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-06-03 14:50:10.405288 | orchestrator | 14:50:10.357 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-06-03 14:50:10.405299 | orchestrator | 14:50:10.365 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-06-03 14:50:15.870593 | orchestrator | 14:50:15.870 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=94a8340d-0536-42dd-9eba-92d360910514] 2025-06-03 14:50:15.880525 | orchestrator | 14:50:15.880 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-06-03 14:50:19.738754 | orchestrator | 14:50:19.738 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-06-03 14:50:19.742073 | orchestrator | 14:50:19.741 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-06-03 14:50:19.742184 | orchestrator | 14:50:19.741 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-06-03 14:50:19.742332 | orchestrator | 14:50:19.742 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... 
[10s elapsed] 2025-06-03 14:50:19.743086 | orchestrator | 14:50:19.742 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-06-03 14:50:19.754463 | orchestrator | 14:50:19.754 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-06-03 14:50:20.288403 | orchestrator | 14:50:20.288 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-06-03 14:50:20.490341 | orchestrator | 14:50:20.302 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-06-03 14:50:20.490387 | orchestrator | 14:50:20.366 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-06-03 14:50:20.490395 | orchestrator | 14:50:20.452 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=fdccfd9d-7310-474c-a0d9-9edfc2c702c2] 2025-06-03 14:50:20.490402 | orchestrator | 14:50:20.456 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=2cdbec4e-06c4-422d-9c10-82dc5d1a2447] 2025-06-03 14:50:20.490407 | orchestrator | 14:50:20.470 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-06-03 14:50:20.490413 | orchestrator | 14:50:20.474 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-06-03 14:50:20.490418 | orchestrator | 14:50:20.477 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=6fb9127004b99d49706f24a38e73bd5cf4ec8287] 2025-06-03 14:50:20.490423 | orchestrator | 14:50:20.482 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=ed26131c-3f0f-451a-b8c2-bbd32b81be35] 2025-06-03 14:50:20.490438 | orchestrator | 14:50:20.484 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=8933e5be-3d9f-49f8-8e64-ba28ae06c2c5] 2025-06-03 14:50:20.490447 | orchestrator | 14:50:20.487 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-06-03 14:50:20.495189 | orchestrator | 14:50:20.494 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-06-03 14:50:20.495228 | orchestrator | 14:50:20.494 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-06-03 14:50:20.511952 | orchestrator | 14:50:20.511 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=ed9de92b-af3d-4178-85d8-fb362235eb6e] 2025-06-03 14:50:20.518304 | orchestrator | 14:50:20.518 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-06-03 14:50:20.533810 | orchestrator | 14:50:20.533 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=2951de99-f35b-4f27-b1a6-63f5628a8d81] 2025-06-03 14:50:20.540054 | orchestrator | 14:50:20.539 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-06-03 14:50:20.574320 | orchestrator | 14:50:20.574 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=31f44141-6971-4db5-beb8-c246a91f5ce9] 2025-06-03 14:50:20.581530 | orchestrator | 14:50:20.581 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 
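
Editor's note: the indexed names (node_volume[0] through node_volume[8], node_base_volume[0] through node_base_volume[5]) show that the volumes are counted resources. A minimal sketch of such a counted volume, assuming the count of 9 seen above; the name and size are placeholders and are not taken from this log:

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9                                      # node_volume[0] .. node_volume[8] in the log
  name  = "testbed-node-volume-${count.index}"   # placeholder name
  size  = 20                                     # size in GB; not shown in this excerpt
}
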
2025-06-03 14:50:20.584438 | orchestrator | 14:50:20.584 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=8472aa113203cf18d7f1146d85e0b8ccf67aacde] 2025-06-03 14:50:20.589153 | orchestrator | 14:50:20.589 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-06-03 14:50:20.596207 | orchestrator | 14:50:20.596 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=fcdad7f2-a581-4945-a365-f13dc1f4f057] 2025-06-03 14:50:20.831149 | orchestrator | 14:50:20.830 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=c4f16882-4bb9-4b45-98df-7e8f068d9144] 2025-06-03 14:50:25.884232 | orchestrator | 14:50:25.883 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-06-03 14:50:26.303143 | orchestrator | 14:50:26.302 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=ec3228da-4bad-4c76-962a-ee5fe22cceb4] 2025-06-03 14:50:26.611664 | orchestrator | 14:50:26.611 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=04904c12-bc40-4e60-85ba-c599314ebf1f] 2025-06-03 14:50:26.615736 | orchestrator | 14:50:26.615 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-06-03 14:50:30.469951 | orchestrator | 14:50:30.469 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-06-03 14:50:30.488204 | orchestrator | 14:50:30.487 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-06-03 14:50:30.492391 | orchestrator | 14:50:30.492 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-06-03 14:50:30.494635 | orchestrator | 14:50:30.494 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-06-03 14:50:30.519838 | orchestrator | 14:50:30.519 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-06-03 14:50:30.541163 | orchestrator | 14:50:30.540 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... 
[10s elapsed] 2025-06-03 14:50:30.867329 | orchestrator | 14:50:30.866 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=e15e68c1-db55-4e4e-993f-f3c7420d4747] 2025-06-03 14:50:30.906929 | orchestrator | 14:50:30.906 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=55c4c1ce-4a0d-4db2-bd1e-96ac1249648a] 2025-06-03 14:50:30.937501 | orchestrator | 14:50:30.937 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=b41579e6-9332-4319-8cbf-d77eb525d8df] 2025-06-03 14:50:30.985567 | orchestrator | 14:50:30.985 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=daa37257-efba-4fc6-9313-1e4cfc74b56a] 2025-06-03 14:50:31.002423 | orchestrator | 14:50:31.001 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=dda24dc0-b982-41a5-9f14-a27821313269] 2025-06-03 14:50:31.042478 | orchestrator | 14:50:31.041 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=f0290b61-6b8b-4cc7-ab0c-9f653b503509] 2025-06-03 14:50:34.809357 | orchestrator | 14:50:34.809 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=2478f37b-db1a-42ae-84a0-3ead0edf7e06] 2025-06-03 14:50:34.815027 | orchestrator | 14:50:34.814 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-06-03 14:50:34.819048 | orchestrator | 14:50:34.818 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-06-03 14:50:34.821186 | orchestrator | 14:50:34.821 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-06-03 14:50:35.408489 | orchestrator | 14:50:35.408 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=26264801-4e97-49f7-8455-2f5f6ee0ceba] 2025-06-03 14:50:35.424783 | orchestrator | 14:50:35.424 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-06-03 14:50:35.425000 | orchestrator | 14:50:35.424 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-06-03 14:50:35.425432 | orchestrator | 14:50:35.425 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-06-03 14:50:35.427460 | orchestrator | 14:50:35.427 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-06-03 14:50:35.427497 | orchestrator | 14:50:35.427 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-06-03 14:50:35.432018 | orchestrator | 14:50:35.431 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-06-03 14:50:35.502994 | orchestrator | 14:50:35.502 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=178bbc4d-0a30-46d5-91b9-9e7a18bca3a3] 2025-06-03 14:50:35.511909 | orchestrator | 14:50:35.511 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-06-03 14:50:35.511984 | orchestrator | 14:50:35.511 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 
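
Editor's note: the security group and rules created here correspond to the plan entries shown earlier (ssh on 22/tcp and wireguard on 51820/udp, both from 0.0.0.0/0). A sketch reconstructed from those plan values; only the security_group_id references are assumptions:

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
  description       = "wireguard"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "udp"
  port_range_min    = 51820
  port_range_max    = 51820
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
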
2025-06-03 14:50:35.515060 | orchestrator | 14:50:35.514 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-06-03 14:50:35.592340 | orchestrator | 14:50:35.591 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=fae9df83-514f-42d7-b437-aaf9d07c9277] 2025-06-03 14:50:35.607530 | orchestrator | 14:50:35.607 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-06-03 14:50:35.737550 | orchestrator | 14:50:35.737 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=4e0b3728-277f-400d-ad4f-c717089dd17f] 2025-06-03 14:50:35.756512 | orchestrator | 14:50:35.756 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-06-03 14:50:35.849161 | orchestrator | 14:50:35.848 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=2f28db30-0c1e-4bc8-8e27-37cc7f810712] 2025-06-03 14:50:35.859077 | orchestrator | 14:50:35.858 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-06-03 14:50:36.020828 | orchestrator | 14:50:36.020 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=7c050c41-10dc-4032-83f5-9cb841021610] 2025-06-03 14:50:36.035609 | orchestrator | 14:50:36.035 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-06-03 14:50:36.045199 | orchestrator | 14:50:36.044 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=c0ef1cc5-9379-4cef-97ae-d756fe2fd4c4] 2025-06-03 14:50:36.061568 | orchestrator | 14:50:36.061 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-06-03 14:50:36.318475 | orchestrator | 14:50:36.318 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=105d4b5a-3fb0-4ad1-88ef-7b149b20be44] 2025-06-03 14:50:36.334677 | orchestrator | 14:50:36.334 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-06-03 14:50:36.494927 | orchestrator | 14:50:36.494 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=687ecc63-5229-4478-8e31-83514a883903] 2025-06-03 14:50:36.501585 | orchestrator | 14:50:36.501 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
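
Editor's note: the management ports being created here were planned earlier with a fixed IP of 192.168.16.15 on the manager port and a set of allowed address pairs. A sketch of the manager port consistent with those plan values; the network, subnet, and security group references are assumptions:

resource "openstack_networking_port_v2" "manager_port_management" {
  network_id         = openstack_networking_network_v2.net_management.id                # assumed
  security_group_ids = [openstack_networking_secgroup_v2.security_group_management.id]  # assumed

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.15"
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }

  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }

  # further allowed_address_pairs blocks for 192.168.16.8/20 and 192.168.16.9/20 as in the plan
}
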
2025-06-03 14:50:36.770235 | orchestrator | 14:50:36.769 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=9bbc931e-d5a0-4946-b4ad-08a5d042ce74] 2025-06-03 14:50:36.836425 | orchestrator | 14:50:36.836 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=721bb143-2455-4e93-8356-0f5a5668174e] 2025-06-03 14:50:41.100274 | orchestrator | 14:50:41.099 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=a7beec0e-ec5b-4f5a-bc05-d68e32b9f599] 2025-06-03 14:50:41.388158 | orchestrator | 14:50:41.387 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=d74e4632-3cd8-4cf0-a5fb-8540ce5920fe] 2025-06-03 14:50:41.417951 | orchestrator | 14:50:41.417 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=701db7b9-d6b8-4e6e-8de5-0004ab6cb10f] 2025-06-03 14:50:41.766535 | orchestrator | 14:50:41.766 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=ae423bbc-8ed3-4b27-a48e-25001c7b8dfe] 2025-06-03 14:50:41.768558 | orchestrator | 14:50:41.768 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=22b06b07-515f-4a3c-9a5d-463f3edf6e25] 2025-06-03 14:50:41.782583 | orchestrator | 14:50:41.782 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=325fe94b-d859-4a11-b386-5b555df231af] 2025-06-03 14:50:42.166713 | orchestrator | 14:50:42.166 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=006dfe64-d391-4161-8e55-2d5ee7f8403d] 2025-06-03 14:50:42.829991 | orchestrator | 14:50:42.829 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=fcbd7e6e-1c01-48f5-b37a-cd2cf35153dd] 2025-06-03 14:50:42.854878 | orchestrator | 14:50:42.854 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-06-03 14:50:42.866315 | orchestrator | 14:50:42.866 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-06-03 14:50:42.869526 | orchestrator | 14:50:42.869 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-06-03 14:50:42.878071 | orchestrator | 14:50:42.874 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-06-03 14:50:42.882544 | orchestrator | 14:50:42.882 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-06-03 14:50:42.882832 | orchestrator | 14:50:42.882 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-06-03 14:50:42.889083 | orchestrator | 14:50:42.888 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-06-03 14:50:49.373539 | orchestrator | 14:50:49.373 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=7bc5c1b4-1673-4c4e-b015-8f2d5a4caefa] 2025-06-03 14:50:49.397270 | orchestrator | 14:50:49.397 STDOUT terraform: local_file.inventory: Creating... 2025-06-03 14:50:49.398212 | orchestrator | 14:50:49.398 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 
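
Editor's note: the floating IP created next is associated with the manager port and later surfaced as the manager address. A sketch of that pair of resources, assuming the association uses the manager port shown above; the pool name is a placeholder, since it is not visible in this log:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external"   # placeholder pool name; not shown in the log
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id   # assumed port
}
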
2025-06-03 14:50:49.406088 | orchestrator | 14:50:49.402 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=2fca835740ef693d847f51808be6919155388f88] 2025-06-03 14:50:49.417122 | orchestrator | 14:50:49.416 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-06-03 14:50:49.420977 | orchestrator | 14:50:49.420 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=52b2ee4a7cbd3f7a463584e88c18c61c00cb0f05] 2025-06-03 14:50:50.607913 | orchestrator | 14:50:50.607 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=7bc5c1b4-1673-4c4e-b015-8f2d5a4caefa] 2025-06-03 14:50:52.869705 | orchestrator | 14:50:52.869 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-06-03 14:50:52.870742 | orchestrator | 14:50:52.870 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-06-03 14:50:52.875956 | orchestrator | 14:50:52.875 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-06-03 14:50:52.883467 | orchestrator | 14:50:52.883 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-06-03 14:50:52.883578 | orchestrator | 14:50:52.883 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-06-03 14:50:52.889677 | orchestrator | 14:50:52.889 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-06-03 14:51:02.870626 | orchestrator | 14:51:02.870 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-06-03 14:51:02.872288 | orchestrator | 14:51:02.871 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-06-03 14:51:02.877176 | orchestrator | 14:51:02.876 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-06-03 14:51:02.884285 | orchestrator | 14:51:02.883 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-06-03 14:51:02.884406 | orchestrator | 14:51:02.884 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-06-03 14:51:02.890629 | orchestrator | 14:51:02.890 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-06-03 14:51:03.415749 | orchestrator | 14:51:03.415 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=52219a62-73f2-4c6d-9a5e-392df885ba16] 2025-06-03 14:51:03.654198 | orchestrator | 14:51:03.653 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=a4fdb6db-b193-443f-89f3-a295548cc2dc] 2025-06-03 14:51:03.812992 | orchestrator | 14:51:03.812 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=18aa6507-04a4-4d8a-9789-e87d92972002] 2025-06-03 14:51:12.884598 | orchestrator | 14:51:12.884 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-06-03 14:51:12.884751 | orchestrator | 14:51:12.884 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-06-03 14:51:12.891733 | orchestrator | 14:51:12.891 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2025-06-03 14:51:13.356851 | orchestrator | 14:51:13.356 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=9f59e074-01ba-4d1d-926c-7282cd156302] 2025-06-03 14:51:22.885669 | orchestrator | 14:51:22.885 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2025-06-03 14:51:22.885793 | orchestrator | 14:51:22.885 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2025-06-03 14:51:23.566900 | orchestrator | 14:51:23.566 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=b290e38f-17e7-4dd7-817f-7c2796daad2c] 2025-06-03 14:51:23.774529 | orchestrator | 14:51:23.774 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=4b1bc651-2c4e-494a-9d9e-196de6091f52] 2025-06-03 14:51:23.780662 | orchestrator | 14:51:23.780 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-06-03 14:51:23.783675 | orchestrator | 14:51:23.783 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=7908724620789607506] 2025-06-03 14:51:23.788639 | orchestrator | 14:51:23.788 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-03 14:51:23.794696 | orchestrator | 14:51:23.794 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-06-03 14:51:23.815649 | orchestrator | 14:51:23.815 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-06-03 14:51:23.816861 | orchestrator | 14:51:23.816 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-03 14:51:23.819543 | orchestrator | 14:51:23.819 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-03 14:51:23.826175 | orchestrator | 14:51:23.825 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-03 14:51:23.827315 | orchestrator | 14:51:23.827 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-03 14:51:23.831820 | orchestrator | 14:51:23.831 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-03 14:51:23.831988 | orchestrator | 14:51:23.831 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-03 14:51:23.837114 | orchestrator | 14:51:23.834 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
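
Editor's note: the attachment IDs reported next have the form <instance id>/<volume id> and show volumes 0/3/6 landing on node_server[3], 1/4/7 on node_server[4], and 2/5/8 on node_server[5]. One counted expression consistent with that distribution, given purely as an illustration of how the mapping could be written (the actual testbed code may differ):

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id  # assumed mapping
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
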
2025-06-03 14:51:29.127221 | orchestrator | 14:51:29.126 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=b290e38f-17e7-4dd7-817f-7c2796daad2c/c4f16882-4bb9-4b45-98df-7e8f068d9144] 2025-06-03 14:51:29.395724 | orchestrator | 14:51:29.144 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=52219a62-73f2-4c6d-9a5e-392df885ba16/2cdbec4e-06c4-422d-9c10-82dc5d1a2447] 2025-06-03 14:51:29.395785 | orchestrator | 14:51:29.158 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=18aa6507-04a4-4d8a-9789-e87d92972002/8933e5be-3d9f-49f8-8e64-ba28ae06c2c5] 2025-06-03 14:51:29.395795 | orchestrator | 14:51:29.180 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=52219a62-73f2-4c6d-9a5e-392df885ba16/fcdad7f2-a581-4945-a365-f13dc1f4f057] 2025-06-03 14:51:29.395802 | orchestrator | 14:51:29.194 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=b290e38f-17e7-4dd7-817f-7c2796daad2c/ed26131c-3f0f-451a-b8c2-bbd32b81be35] 2025-06-03 14:51:29.395810 | orchestrator | 14:51:29.214 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=18aa6507-04a4-4d8a-9789-e87d92972002/fdccfd9d-7310-474c-a0d9-9edfc2c702c2] 2025-06-03 14:51:29.395817 | orchestrator | 14:51:29.240 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=52219a62-73f2-4c6d-9a5e-392df885ba16/31f44141-6971-4db5-beb8-c246a91f5ce9] 2025-06-03 14:51:29.395825 | orchestrator | 14:51:29.249 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=b290e38f-17e7-4dd7-817f-7c2796daad2c/2951de99-f35b-4f27-b1a6-63f5628a8d81] 2025-06-03 14:51:29.395836 | orchestrator | 14:51:29.277 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=18aa6507-04a4-4d8a-9789-e87d92972002/ed9de92b-af3d-4178-85d8-fb362235eb6e] 2025-06-03 14:51:33.841976 | orchestrator | 14:51:33.841 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-03 14:51:43.843453 | orchestrator | 14:51:43.843 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-03 14:51:44.309936 | orchestrator | 14:51:44.309 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=6f69aa40-62ba-472d-8087-c15648d155e5] 2025-06-03 14:51:44.331926 | orchestrator | 14:51:44.331 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
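
Editor's note: the two outputs reported next are printed without values because they were declared sensitive, matching the "(sensitive value)" entries in the plan. A sketch of what such output declarations look like; the value expressions are assumptions about where the address and key actually come from:

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address  # assumed source
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key                    # assumed source
  sensitive = true
}

Sensitive outputs can still be read on request, for example with "terraform output -raw manager_address".
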
2025-06-03 14:51:44.332076 | orchestrator | 14:51:44.331 STDOUT terraform: Outputs: 2025-06-03 14:51:44.332110 | orchestrator | 14:51:44.331 STDOUT terraform: manager_address = 2025-06-03 14:51:44.332124 | orchestrator | 14:51:44.331 STDOUT terraform: private_key = 2025-06-03 14:51:44.415544 | orchestrator | ok: Runtime: 0:01:46.882188 2025-06-03 14:51:44.458862 | 2025-06-03 14:51:44.459033 | TASK [Fetch manager address] 2025-06-03 14:51:44.948127 | orchestrator | ok 2025-06-03 14:51:44.964508 | 2025-06-03 14:51:44.964780 | TASK [Set manager_host address] 2025-06-03 14:51:45.038609 | orchestrator | ok 2025-06-03 14:51:45.049651 | 2025-06-03 14:51:45.049808 | LOOP [Update ansible collections] 2025-06-03 14:51:51.279829 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-03 14:51:51.280232 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-03 14:51:51.280295 | orchestrator | Starting galaxy collection install process 2025-06-03 14:51:51.280334 | orchestrator | Process install dependency map 2025-06-03 14:51:51.280369 | orchestrator | Starting collection install process 2025-06-03 14:51:51.280401 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-06-03 14:51:51.280445 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-06-03 14:51:51.280487 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-03 14:51:51.280557 | orchestrator | ok: Item: commons Runtime: 0:00:05.900284 2025-06-03 14:51:52.847374 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-03 14:51:52.847573 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-03 14:51:52.847641 | orchestrator | Starting galaxy collection install process 2025-06-03 14:51:52.847692 | orchestrator | Process install dependency map 2025-06-03 14:51:52.847742 | orchestrator | Starting collection install process 2025-06-03 14:51:52.847802 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-06-03 14:51:52.847845 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-06-03 14:51:52.847882 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-03 14:51:52.847941 | orchestrator | ok: Item: services Runtime: 0:00:01.280882 2025-06-03 14:51:52.873891 | 2025-06-03 14:51:52.874073 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-03 14:52:03.418077 | orchestrator | ok 2025-06-03 14:52:03.428685 | 2025-06-03 14:52:03.428806 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-03 14:53:03.470709 | orchestrator | ok 2025-06-03 14:53:03.480916 | 2025-06-03 14:53:03.481041 | TASK [Fetch manager ssh hostkey] 2025-06-03 14:53:05.057969 | orchestrator | Output suppressed because no_log was given 2025-06-03 14:53:05.072101 | 2025-06-03 14:53:05.072293 | TASK [Get ssh keypair from terraform environment] 2025-06-03 14:53:05.611709 | orchestrator | ok: Runtime: 0:00:00.009023 2025-06-03 14:53:05.628419 | 2025-06-03 14:53:05.628596 | TASK [Point out that the following task takes some time and does not give any output] 
2025-06-03 14:53:05.666194 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-03 14:53:05.675654 | 2025-06-03 14:53:05.675800 | TASK [Run manager part 0] 2025-06-03 14:53:07.580660 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-03 14:53:07.810938 | orchestrator | 2025-06-03 14:53:07.811043 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-03 14:53:07.811108 | orchestrator | 2025-06-03 14:53:07.811142 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-03 14:53:09.713874 | orchestrator | ok: [testbed-manager] 2025-06-03 14:53:09.713919 | orchestrator | 2025-06-03 14:53:09.713939 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-03 14:53:09.713948 | orchestrator | 2025-06-03 14:53:09.713956 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 14:53:11.872718 | orchestrator | ok: [testbed-manager] 2025-06-03 14:53:11.872780 | orchestrator | 2025-06-03 14:53:11.872790 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-03 14:53:12.643014 | orchestrator | ok: [testbed-manager] 2025-06-03 14:53:12.643121 | orchestrator | 2025-06-03 14:53:12.643133 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-03 14:53:12.700412 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:53:12.700457 | orchestrator | 2025-06-03 14:53:12.700468 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-03 14:53:12.725581 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:53:12.725630 | orchestrator | 2025-06-03 14:53:12.725637 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-03 14:53:12.752428 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:53:12.752472 | orchestrator | 2025-06-03 14:53:12.752478 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-03 14:53:12.788495 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:53:12.788552 | orchestrator | 2025-06-03 14:53:12.788560 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-03 14:53:12.825238 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:53:12.825284 | orchestrator | 2025-06-03 14:53:12.825291 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-03 14:53:12.861576 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:53:12.861620 | orchestrator | 2025-06-03 14:53:12.861627 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-03 14:53:12.891732 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:53:12.891817 | orchestrator | 2025-06-03 14:53:12.891838 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-03 14:53:13.796781 | orchestrator | changed: [testbed-manager] 2025-06-03 14:53:13.796862 | orchestrator | 2025-06-03 14:53:13.796869 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-06-03 14:56:39.795764 | orchestrator | changed: [testbed-manager] 2025-06-03 14:56:39.795980 | orchestrator | 2025-06-03 14:56:39.796002 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-03 14:58:06.788560 | orchestrator | changed: [testbed-manager] 2025-06-03 14:58:06.788657 | orchestrator | 2025-06-03 14:58:06.788675 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-03 14:58:29.235743 | orchestrator | changed: [testbed-manager] 2025-06-03 14:58:29.235835 | orchestrator | 2025-06-03 14:58:29.235855 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-03 14:58:38.250273 | orchestrator | changed: [testbed-manager] 2025-06-03 14:58:38.250361 | orchestrator | 2025-06-03 14:58:38.250377 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-03 14:58:38.296704 | orchestrator | ok: [testbed-manager] 2025-06-03 14:58:38.296794 | orchestrator | 2025-06-03 14:58:38.296812 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-03 14:58:39.110816 | orchestrator | ok: [testbed-manager] 2025-06-03 14:58:39.110903 | orchestrator | 2025-06-03 14:58:39.111135 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-03 14:58:39.890380 | orchestrator | changed: [testbed-manager] 2025-06-03 14:58:39.890547 | orchestrator | 2025-06-03 14:58:39.890562 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-03 14:58:46.455555 | orchestrator | changed: [testbed-manager] 2025-06-03 14:58:46.455640 | orchestrator | 2025-06-03 14:58:46.455703 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-03 14:58:52.698888 | orchestrator | changed: [testbed-manager] 2025-06-03 14:58:52.698968 | orchestrator | 2025-06-03 14:58:52.698985 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-03 14:58:55.287885 | orchestrator | changed: [testbed-manager] 2025-06-03 14:58:55.287965 | orchestrator | 2025-06-03 14:58:55.287981 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-03 14:58:57.083370 | orchestrator | changed: [testbed-manager] 2025-06-03 14:58:57.083434 | orchestrator | 2025-06-03 14:58:57.083449 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-03 14:58:58.232807 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-03 14:58:58.232896 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-03 14:58:58.232912 | orchestrator | 2025-06-03 14:58:58.232924 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-03 14:58:58.280277 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-03 14:58:58.280326 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-03 14:58:58.280332 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-03 14:58:58.280337 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-03 14:59:07.323618 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-03 14:59:07.323687 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-03 14:59:07.323695 | orchestrator | 2025-06-03 14:59:07.323702 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-03 14:59:07.928137 | orchestrator | changed: [testbed-manager] 2025-06-03 14:59:07.928403 | orchestrator | 2025-06-03 14:59:07.928428 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-03 14:59:26.731299 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-03 14:59:26.731405 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-03 14:59:26.731425 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-03 14:59:26.731438 | orchestrator | 2025-06-03 14:59:26.731451 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-03 14:59:29.074859 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-03 14:59:29.074913 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-03 14:59:29.074919 | orchestrator | 2025-06-03 14:59:29.074924 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-03 14:59:29.074929 | orchestrator | 2025-06-03 14:59:29.074934 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 14:59:30.511058 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:30.511139 | orchestrator | 2025-06-03 14:59:30.511157 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-03 14:59:30.552951 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:30.553035 | orchestrator | 2025-06-03 14:59:30.553053 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-03 14:59:30.634488 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:30.634532 | orchestrator | 2025-06-03 14:59:30.634542 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-03 14:59:31.434417 | orchestrator | changed: [testbed-manager] 2025-06-03 14:59:31.434467 | orchestrator | 2025-06-03 14:59:31.434479 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-03 14:59:32.166970 | orchestrator | changed: [testbed-manager] 2025-06-03 14:59:32.167013 | orchestrator | 2025-06-03 14:59:32.167022 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-03 14:59:33.587801 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-03 14:59:33.587884 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-03 14:59:33.587899 | orchestrator | 2025-06-03 14:59:33.587927 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-03 14:59:35.048741 | orchestrator | changed: [testbed-manager] 2025-06-03 14:59:35.048855 | orchestrator | 2025-06-03 14:59:35.048873 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-03 14:59:36.846799 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-03 
14:59:36.846891 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-03 14:59:36.846907 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-03 14:59:36.846919 | orchestrator | 2025-06-03 14:59:36.846932 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-03 14:59:37.438785 | orchestrator | changed: [testbed-manager] 2025-06-03 14:59:37.438837 | orchestrator | 2025-06-03 14:59:37.438847 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-03 14:59:37.511655 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:59:37.511719 | orchestrator | 2025-06-03 14:59:37.511735 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-03 14:59:38.400661 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 14:59:38.400739 | orchestrator | changed: [testbed-manager] 2025-06-03 14:59:38.400756 | orchestrator | 2025-06-03 14:59:38.400769 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-03 14:59:38.438844 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:59:38.438901 | orchestrator | 2025-06-03 14:59:38.438915 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-03 14:59:38.479814 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:59:38.479874 | orchestrator | 2025-06-03 14:59:38.479890 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-03 14:59:38.516539 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:59:38.516628 | orchestrator | 2025-06-03 14:59:38.516648 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-03 14:59:38.572392 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:59:38.572453 | orchestrator | 2025-06-03 14:59:38.572468 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-03 14:59:39.324414 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:39.324488 | orchestrator | 2025-06-03 14:59:39.324510 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-03 14:59:39.324527 | orchestrator | 2025-06-03 14:59:39.324547 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 14:59:40.741548 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:40.741637 | orchestrator | 2025-06-03 14:59:40.741653 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-03 14:59:41.721759 | orchestrator | changed: [testbed-manager] 2025-06-03 14:59:41.721797 | orchestrator | 2025-06-03 14:59:41.721803 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 14:59:41.721808 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-03 14:59:41.721813 | orchestrator | 2025-06-03 14:59:41.930340 | orchestrator | ok: Runtime: 0:06:35.853190 2025-06-03 14:59:41.948495 | 2025-06-03 14:59:41.948709 | TASK [Point out that the log in on the manager is now possible] 2025-06-03 14:59:41.990073 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
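The osism.commons.operator tasks above create the operator account that the rest of the deployment logs in with. Reduced to plain shell (the role itself uses Ansible modules; the user name dragon is inferred from the /home/dragon paths that appear later in this log, and the authorized-keys step is only indicated as a comment), the steps are roughly:

    # Create group and user, grant admin group membership, prepare ~/.ssh,
    # and lock the password so access works via SSH keys only.
    groupadd dragon
    useradd -m -g dragon -s /bin/bash dragon
    usermod -aG adm,sudo dragon
    install -d -m 0700 -o dragon -g dragon /home/dragon/.ssh
    # authorized_keys would be populated from the configured operator keys here
    passwd -l dragon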
2025-06-03 14:59:42.000569 | 2025-06-03 14:59:42.000702 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-03 14:59:42.049511 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-03 14:59:42.060671 | 2025-06-03 14:59:42.061056 | TASK [Run manager part 1 + 2] 2025-06-03 14:59:42.917507 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-03 14:59:42.973021 | orchestrator | 2025-06-03 14:59:42.973079 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-03 14:59:42.973086 | orchestrator | 2025-06-03 14:59:42.973099 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 14:59:45.638652 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:45.638864 | orchestrator | 2025-06-03 14:59:45.638893 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-03 14:59:45.683058 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:59:45.683109 | orchestrator | 2025-06-03 14:59:45.683120 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-03 14:59:45.723692 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:45.723742 | orchestrator | 2025-06-03 14:59:45.723751 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-03 14:59:45.770078 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:45.770132 | orchestrator | 2025-06-03 14:59:45.770141 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-03 14:59:45.843030 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:45.843091 | orchestrator | 2025-06-03 14:59:45.843105 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-03 14:59:45.901748 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:45.901804 | orchestrator | 2025-06-03 14:59:45.901815 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-03 14:59:45.946700 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-03 14:59:45.946744 | orchestrator | 2025-06-03 14:59:45.946750 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-03 14:59:46.657610 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:46.657667 | orchestrator | 2025-06-03 14:59:46.657677 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-03 14:59:46.708326 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:59:46.708378 | orchestrator | 2025-06-03 14:59:46.708386 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-03 14:59:48.114579 | orchestrator | changed: [testbed-manager] 2025-06-03 14:59:48.114648 | orchestrator | 2025-06-03 14:59:48.114660 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-03 14:59:48.728912 | orchestrator | ok: [testbed-manager] 2025-06-03 14:59:48.728968 | orchestrator | 2025-06-03 14:59:48.728978 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-03 14:59:49.915092 | orchestrator | changed: [testbed-manager] 2025-06-03 14:59:49.915146 | orchestrator | 2025-06-03 14:59:49.915157 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-03 15:00:03.943367 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:03.943465 | orchestrator | 2025-06-03 15:00:03.943482 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-03 15:00:04.622573 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:04.622642 | orchestrator | 2025-06-03 15:00:04.622658 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-03 15:00:04.678433 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:00:04.678483 | orchestrator | 2025-06-03 15:00:04.678494 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-03 15:00:05.666665 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:05.666711 | orchestrator | 2025-06-03 15:00:05.666720 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-03 15:00:06.673951 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:06.674072 | orchestrator | 2025-06-03 15:00:06.674093 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-03 15:00:07.254119 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:07.254204 | orchestrator | 2025-06-03 15:00:07.254220 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-03 15:00:07.292243 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-03 15:00:07.292348 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-03 15:00:07.292363 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-03 15:00:07.292376 | orchestrator | deprecation_warnings=False in ansible.cfg. 
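The repository role above removes the classic /etc/apt/sources.list and replaces it with a deb822-style ubuntu.sources file before refreshing the package cache. The snippet below only illustrates that mechanism; the mirror URL, suites and keyring path are placeholders, and the real values come from the role's configuration:

    # Write a deb822-style source entry, drop the legacy sources.list,
    # and refresh the package cache (values are illustrative).
    {
      echo 'Types: deb'
      echo 'URIs: http://archive.ubuntu.com/ubuntu'
      echo 'Suites: noble noble-updates noble-backports noble-security'
      echo 'Components: main restricted universe multiverse'
      echo 'Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg'
    } > /etc/apt/sources.list.d/ubuntu.sources
    rm -f /etc/apt/sources.list
    apt-get update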
2025-06-03 15:00:11.646007 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:11.646139 | orchestrator | 2025-06-03 15:00:11.646159 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-03 15:00:20.671241 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-03 15:00:20.671283 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-03 15:00:20.671291 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-03 15:00:20.671298 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-03 15:00:20.671308 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-03 15:00:20.671314 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-03 15:00:20.671320 | orchestrator | 2025-06-03 15:00:20.671326 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-03 15:00:21.749927 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:21.749971 | orchestrator | 2025-06-03 15:00:21.749981 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-03 15:00:21.789624 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:00:21.789667 | orchestrator | 2025-06-03 15:00:21.789676 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-03 15:00:25.080246 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:25.080334 | orchestrator | 2025-06-03 15:00:25.080352 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-03 15:00:25.128538 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:00:25.128594 | orchestrator | 2025-06-03 15:00:25.128602 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-03 15:02:11.381353 | orchestrator | changed: [testbed-manager] 2025-06-03 15:02:11.381453 | orchestrator | 2025-06-03 15:02:11.381464 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-03 15:02:12.589924 | orchestrator | ok: [testbed-manager] 2025-06-03 15:02:12.589963 | orchestrator | 2025-06-03 15:02:12.589970 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:02:12.589977 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-03 15:02:12.589982 | orchestrator | 2025-06-03 15:02:13.201574 | orchestrator | ok: Runtime: 0:02:30.337596 2025-06-03 15:02:13.222189 | 2025-06-03 15:02:13.222368 | TASK [Reboot manager] 2025-06-03 15:02:14.763236 | orchestrator | ok: Runtime: 0:00:00.983127 2025-06-03 15:02:14.779902 | 2025-06-03 15:02:14.780056 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-03 15:02:30.045540 | orchestrator | ok 2025-06-03 15:02:30.055593 | 2025-06-03 15:02:30.055730 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-03 15:03:30.103603 | orchestrator | ok 2025-06-03 15:03:30.114821 | 2025-06-03 15:03:30.115003 | TASK [Deploy manager + bootstrap nodes] 2025-06-03 15:03:32.735756 | orchestrator | 2025-06-03 15:03:32.735952 | orchestrator | # DEPLOY MANAGER 2025-06-03 15:03:32.735978 | orchestrator | 2025-06-03 15:03:32.735992 | orchestrator | + set -e 2025-06-03 15:03:32.736006 | orchestrator | + echo 2025-06-03 15:03:32.736019 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-06-03 15:03:32.736037 | orchestrator | + echo 2025-06-03 15:03:32.736085 | orchestrator | + cat /opt/manager-vars.sh 2025-06-03 15:03:32.739831 | orchestrator | export NUMBER_OF_NODES=6 2025-06-03 15:03:32.739876 | orchestrator | 2025-06-03 15:03:32.739889 | orchestrator | export CEPH_VERSION=reef 2025-06-03 15:03:32.739903 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-03 15:03:32.739915 | orchestrator | export MANAGER_VERSION=9.1.0 2025-06-03 15:03:32.739939 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-03 15:03:32.739951 | orchestrator | 2025-06-03 15:03:32.739969 | orchestrator | export ARA=false 2025-06-03 15:03:32.740012 | orchestrator | export DEPLOY_MODE=manager 2025-06-03 15:03:32.740031 | orchestrator | export TEMPEST=false 2025-06-03 15:03:32.740043 | orchestrator | export IS_ZUUL=true 2025-06-03 15:03:32.740054 | orchestrator | 2025-06-03 15:03:32.740072 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.73 2025-06-03 15:03:32.740083 | orchestrator | export EXTERNAL_API=false 2025-06-03 15:03:32.740094 | orchestrator | 2025-06-03 15:03:32.740105 | orchestrator | export IMAGE_USER=ubuntu 2025-06-03 15:03:32.740119 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-03 15:03:32.740130 | orchestrator | 2025-06-03 15:03:32.740141 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-03 15:03:32.740160 | orchestrator | 2025-06-03 15:03:32.740172 | orchestrator | + echo 2025-06-03 15:03:32.740188 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-03 15:03:32.740922 | orchestrator | ++ export INTERACTIVE=false 2025-06-03 15:03:32.740969 | orchestrator | ++ INTERACTIVE=false 2025-06-03 15:03:32.740989 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-03 15:03:32.741002 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-03 15:03:32.741228 | orchestrator | + source /opt/manager-vars.sh 2025-06-03 15:03:32.741245 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-03 15:03:32.741256 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-03 15:03:32.741267 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-03 15:03:32.741278 | orchestrator | ++ CEPH_VERSION=reef 2025-06-03 15:03:32.741289 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-03 15:03:32.741299 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-03 15:03:32.741310 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-03 15:03:32.741321 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-03 15:03:32.741474 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-03 15:03:32.741500 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-03 15:03:32.741511 | orchestrator | ++ export ARA=false 2025-06-03 15:03:32.741522 | orchestrator | ++ ARA=false 2025-06-03 15:03:32.741533 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-03 15:03:32.741544 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-03 15:03:32.741554 | orchestrator | ++ export TEMPEST=false 2025-06-03 15:03:32.741565 | orchestrator | ++ TEMPEST=false 2025-06-03 15:03:32.741575 | orchestrator | ++ export IS_ZUUL=true 2025-06-03 15:03:32.741586 | orchestrator | ++ IS_ZUUL=true 2025-06-03 15:03:32.741596 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.73 2025-06-03 15:03:32.741607 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.73 2025-06-03 15:03:32.741618 | orchestrator | ++ export EXTERNAL_API=false 2025-06-03 15:03:32.741629 | orchestrator | ++ EXTERNAL_API=false 2025-06-03 15:03:32.741639 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-03 
15:03:32.741650 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-03 15:03:32.741660 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-03 15:03:32.741671 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-03 15:03:32.741687 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-03 15:03:32.741698 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-03 15:03:32.741709 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-06-03 15:03:32.794846 | orchestrator | + docker version 2025-06-03 15:03:33.071931 | orchestrator | Client: Docker Engine - Community 2025-06-03 15:03:33.072010 | orchestrator | Version: 27.5.1 2025-06-03 15:03:33.072023 | orchestrator | API version: 1.47 2025-06-03 15:03:33.072031 | orchestrator | Go version: go1.22.11 2025-06-03 15:03:33.072038 | orchestrator | Git commit: 9f9e405 2025-06-03 15:03:33.072045 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-03 15:03:33.072108 | orchestrator | OS/Arch: linux/amd64 2025-06-03 15:03:33.072117 | orchestrator | Context: default 2025-06-03 15:03:33.072124 | orchestrator | 2025-06-03 15:03:33.072132 | orchestrator | Server: Docker Engine - Community 2025-06-03 15:03:33.072139 | orchestrator | Engine: 2025-06-03 15:03:33.072147 | orchestrator | Version: 27.5.1 2025-06-03 15:03:33.072154 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-06-03 15:03:33.072212 | orchestrator | Go version: go1.22.11 2025-06-03 15:03:33.072221 | orchestrator | Git commit: 4c9b3b0 2025-06-03 15:03:33.072228 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-03 15:03:33.072235 | orchestrator | OS/Arch: linux/amd64 2025-06-03 15:03:33.072242 | orchestrator | Experimental: false 2025-06-03 15:03:33.072250 | orchestrator | containerd: 2025-06-03 15:03:33.072267 | orchestrator | Version: 1.7.27 2025-06-03 15:03:33.072274 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-06-03 15:03:33.072282 | orchestrator | runc: 2025-06-03 15:03:33.072289 | orchestrator | Version: 1.2.5 2025-06-03 15:03:33.072296 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-06-03 15:03:33.072304 | orchestrator | docker-init: 2025-06-03 15:03:33.072311 | orchestrator | Version: 0.19.0 2025-06-03 15:03:33.072318 | orchestrator | GitCommit: de40ad0 2025-06-03 15:03:33.076331 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-06-03 15:03:33.085733 | orchestrator | + set -e 2025-06-03 15:03:33.086735 | orchestrator | + source /opt/manager-vars.sh 2025-06-03 15:03:33.086767 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-03 15:03:33.086779 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-03 15:03:33.086790 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-03 15:03:33.086800 | orchestrator | ++ CEPH_VERSION=reef 2025-06-03 15:03:33.086811 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-03 15:03:33.086823 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-03 15:03:33.086833 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-03 15:03:33.086844 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-03 15:03:33.086855 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-03 15:03:33.086866 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-03 15:03:33.086876 | orchestrator | ++ export ARA=false 2025-06-03 15:03:33.086888 | orchestrator | ++ ARA=false 2025-06-03 15:03:33.086899 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-03 15:03:33.086909 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-03 15:03:33.086920 | orchestrator | ++ 
export TEMPEST=false 2025-06-03 15:03:33.086930 | orchestrator | ++ TEMPEST=false 2025-06-03 15:03:33.086941 | orchestrator | ++ export IS_ZUUL=true 2025-06-03 15:03:33.086973 | orchestrator | ++ IS_ZUUL=true 2025-06-03 15:03:33.086984 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.73 2025-06-03 15:03:33.086995 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.73 2025-06-03 15:03:33.087005 | orchestrator | ++ export EXTERNAL_API=false 2025-06-03 15:03:33.087016 | orchestrator | ++ EXTERNAL_API=false 2025-06-03 15:03:33.087026 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-03 15:03:33.087037 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-03 15:03:33.087048 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-03 15:03:33.087059 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-03 15:03:33.087069 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-03 15:03:33.087080 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-03 15:03:33.087091 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-03 15:03:33.087102 | orchestrator | ++ export INTERACTIVE=false 2025-06-03 15:03:33.087112 | orchestrator | ++ INTERACTIVE=false 2025-06-03 15:03:33.087123 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-03 15:03:33.087138 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-03 15:03:33.087149 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-03 15:03:33.087160 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0 2025-06-03 15:03:33.093772 | orchestrator | + set -e 2025-06-03 15:03:33.093823 | orchestrator | + VERSION=9.1.0 2025-06-03 15:03:33.093844 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-06-03 15:03:33.103242 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-03 15:03:33.103292 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-06-03 15:03:33.108558 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-06-03 15:03:33.113134 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-06-03 15:03:33.121985 | orchestrator | /opt/configuration ~ 2025-06-03 15:03:33.122059 | orchestrator | + set -e 2025-06-03 15:03:33.122074 | orchestrator | + pushd /opt/configuration 2025-06-03 15:03:33.122085 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-03 15:03:33.123625 | orchestrator | + source /opt/venv/bin/activate 2025-06-03 15:03:33.124445 | orchestrator | ++ deactivate nondestructive 2025-06-03 15:03:33.124467 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:03:33.124494 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:03:33.124796 | orchestrator | ++ hash -r 2025-06-03 15:03:33.124816 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:03:33.124829 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-03 15:03:33.124842 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-03 15:03:33.124853 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-06-03 15:03:33.124864 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-03 15:03:33.124874 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-03 15:03:33.124885 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-03 15:03:33.124896 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-03 15:03:33.124911 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-03 15:03:33.124923 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-03 15:03:33.124934 | orchestrator | ++ export PATH 2025-06-03 15:03:33.124946 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:03:33.125198 | orchestrator | ++ '[' -z '' ']' 2025-06-03 15:03:33.125215 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-03 15:03:33.125226 | orchestrator | ++ PS1='(venv) ' 2025-06-03 15:03:33.125237 | orchestrator | ++ export PS1 2025-06-03 15:03:33.125247 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-03 15:03:33.125258 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-03 15:03:33.125269 | orchestrator | ++ hash -r 2025-06-03 15:03:33.125280 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-06-03 15:03:34.186348 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-06-03 15:03:34.186483 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-06-03 15:03:34.188072 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-06-03 15:03:34.189285 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-06-03 15:03:34.190191 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-06-03 15:03:34.200010 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-06-03 15:03:34.201367 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-06-03 15:03:34.202351 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-06-03 15:03:34.203590 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-06-03 15:03:34.235147 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-06-03 15:03:34.236493 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-06-03 15:03:34.237993 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-06-03 15:03:34.239547 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26) 2025-06-03 15:03:34.243606 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-06-03 15:03:34.455776 | orchestrator | ++ which gilt 2025-06-03 15:03:34.460112 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-06-03 15:03:34.460168 | orchestrator | + /opt/venv/bin/gilt overlay 2025-06-03 15:03:34.707989 | orchestrator | osism.cfg-generics: 2025-06-03 15:03:34.868448 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-06-03 15:03:34.868545 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-06-03 15:03:34.868897 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-06-03 15:03:34.868928 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-06-03 15:03:35.425704 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-06-03 15:03:35.438139 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-06-03 15:03:35.770569 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-06-03 15:03:35.818354 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-03 15:03:35.818510 | orchestrator | + deactivate 2025-06-03 15:03:35.818529 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-03 15:03:35.818550 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-03 15:03:35.818562 | orchestrator | + export PATH 2025-06-03 15:03:35.818574 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-03 15:03:35.818585 | orchestrator | + '[' -n '' ']' 2025-06-03 15:03:35.818597 | orchestrator | + hash -r 2025-06-03 15:03:35.818608 | orchestrator | + '[' -n '' ']' 2025-06-03 15:03:35.818619 | orchestrator | + unset VIRTUAL_ENV 2025-06-03 15:03:35.818629 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-03 15:03:35.818650 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-03 15:03:35.818661 | orchestrator | + unset -f deactivate 2025-06-03 15:03:35.818672 | orchestrator | + popd 2025-06-03 15:03:35.818683 | orchestrator | ~ 2025-06-03 15:03:35.820371 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]] 2025-06-03 15:03:35.820424 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-06-03 15:03:35.821172 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-03 15:03:35.882100 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-03 15:03:35.882166 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-06-03 15:03:35.882179 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-06-03 15:03:35.925717 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-03 15:03:35.925788 | orchestrator | + source /opt/venv/bin/activate 2025-06-03 15:03:35.925812 | orchestrator | ++ deactivate nondestructive 2025-06-03 15:03:35.925835 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:03:35.925846 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:03:35.925868 | orchestrator | ++ hash -r 2025-06-03 15:03:35.925880 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:03:35.925913 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-03 15:03:35.925927 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-03 15:03:35.925938 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-06-03 15:03:35.925949 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-03 15:03:35.925960 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-03 15:03:35.925970 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-03 15:03:35.925992 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-03 15:03:35.926004 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-03 15:03:35.926054 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-03 15:03:35.926084 | orchestrator | ++ export PATH 2025-06-03 15:03:35.926097 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:03:35.926112 | orchestrator | ++ '[' -z '' ']' 2025-06-03 15:03:35.926124 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-03 15:03:35.926134 | orchestrator | ++ PS1='(venv) ' 2025-06-03 15:03:35.926145 | orchestrator | ++ export PS1 2025-06-03 15:03:35.926155 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-03 15:03:35.926166 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-03 15:03:35.926177 | orchestrator | ++ hash -r 2025-06-03 15:03:35.926188 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-06-03 15:03:37.080589 | orchestrator | 2025-06-03 15:03:37.080671 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-06-03 15:03:37.080687 | orchestrator | 2025-06-03 15:03:37.080699 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-03 15:03:37.657725 | orchestrator | ok: [testbed-manager] 2025-06-03 15:03:37.657807 | orchestrator | 2025-06-03 15:03:37.657838 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-03 15:03:38.661381 | orchestrator | changed: [testbed-manager] 2025-06-03 15:03:38.661486 | orchestrator | 2025-06-03 15:03:38.661501 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-06-03 15:03:38.661514 | orchestrator | 2025-06-03 15:03:38.661525 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:03:40.974145 | orchestrator | ok: [testbed-manager] 2025-06-03 15:03:40.974190 | orchestrator | 2025-06-03 15:03:40.974199 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-06-03 15:03:41.030684 | orchestrator | ok: [testbed-manager] 2025-06-03 15:03:41.030760 | orchestrator | 2025-06-03 15:03:41.030776 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-06-03 15:03:41.493048 | orchestrator | changed: [testbed-manager] 2025-06-03 15:03:41.493122 | orchestrator | 2025-06-03 15:03:41.493140 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-06-03 15:03:41.536763 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:03:41.536815 | orchestrator | 2025-06-03 15:03:41.536828 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-03 15:03:41.898068 | orchestrator | changed: [testbed-manager] 2025-06-03 15:03:41.898150 | orchestrator | 2025-06-03 15:03:41.898166 | orchestrator | TASK [Use insecure glance configuration] 
*************************************** 2025-06-03 15:03:41.952172 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:03:41.952231 | orchestrator | 2025-06-03 15:03:41.952245 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-06-03 15:03:42.290812 | orchestrator | ok: [testbed-manager] 2025-06-03 15:03:42.290882 | orchestrator | 2025-06-03 15:03:42.290892 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-06-03 15:03:42.397667 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:03:42.397739 | orchestrator | 2025-06-03 15:03:42.397754 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-06-03 15:03:42.397766 | orchestrator | 2025-06-03 15:03:42.397777 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:03:44.238792 | orchestrator | ok: [testbed-manager] 2025-06-03 15:03:44.238882 | orchestrator | 2025-06-03 15:03:44.238897 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-06-03 15:03:44.343625 | orchestrator | included: osism.services.traefik for testbed-manager 2025-06-03 15:03:44.343714 | orchestrator | 2025-06-03 15:03:44.343729 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-06-03 15:03:44.399566 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-06-03 15:03:44.399646 | orchestrator | 2025-06-03 15:03:44.399657 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-06-03 15:03:45.513896 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-06-03 15:03:45.514001 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-06-03 15:03:45.514064 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-06-03 15:03:45.514080 | orchestrator | 2025-06-03 15:03:45.514092 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-06-03 15:03:47.368588 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-06-03 15:03:47.368707 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-06-03 15:03:47.368722 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-06-03 15:03:47.369560 | orchestrator | 2025-06-03 15:03:47.369582 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-06-03 15:03:48.020954 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 15:03:48.021060 | orchestrator | changed: [testbed-manager] 2025-06-03 15:03:48.021078 | orchestrator | 2025-06-03 15:03:48.021091 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-06-03 15:03:48.642165 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 15:03:48.642262 | orchestrator | changed: [testbed-manager] 2025-06-03 15:03:48.642279 | orchestrator | 2025-06-03 15:03:48.642291 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-06-03 15:03:48.691811 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:03:48.691899 | orchestrator | 2025-06-03 15:03:48.691923 | orchestrator | TASK [osism.services.traefik : Remove 
dynamic configuration] ******************* 2025-06-03 15:03:49.055177 | orchestrator | ok: [testbed-manager] 2025-06-03 15:03:49.055275 | orchestrator | 2025-06-03 15:03:49.055292 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-06-03 15:03:49.126237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-06-03 15:03:49.126315 | orchestrator | 2025-06-03 15:03:49.126329 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-06-03 15:03:50.245232 | orchestrator | changed: [testbed-manager] 2025-06-03 15:03:50.245334 | orchestrator | 2025-06-03 15:03:50.245351 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-06-03 15:03:51.040254 | orchestrator | changed: [testbed-manager] 2025-06-03 15:03:51.040363 | orchestrator | 2025-06-03 15:03:51.040415 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-06-03 15:04:02.095713 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:02.095806 | orchestrator | 2025-06-03 15:04:02.095838 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-06-03 15:04:02.153934 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:04:02.154077 | orchestrator | 2025-06-03 15:04:02.154098 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-06-03 15:04:02.154112 | orchestrator | 2025-06-03 15:04:02.154124 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:04:04.071792 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:04.071891 | orchestrator | 2025-06-03 15:04:04.071904 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-06-03 15:04:04.174782 | orchestrator | included: osism.services.manager for testbed-manager 2025-06-03 15:04:04.174875 | orchestrator | 2025-06-03 15:04:04.174890 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-06-03 15:04:04.230540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-06-03 15:04:04.230623 | orchestrator | 2025-06-03 15:04:04.230638 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-06-03 15:04:06.848970 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:06.849056 | orchestrator | 2025-06-03 15:04:06.849073 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-06-03 15:04:06.890683 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:06.890754 | orchestrator | 2025-06-03 15:04:06.890768 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-06-03 15:04:07.012118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-06-03 15:04:07.012210 | orchestrator | 2025-06-03 15:04:07.012227 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-06-03 15:04:09.928951 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-06-03 15:04:09.929061 | orchestrator | 
changed: [testbed-manager] => (item=/opt/archive) 2025-06-03 15:04:09.929077 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-06-03 15:04:09.929090 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-06-03 15:04:09.929101 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-06-03 15:04:09.929112 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-06-03 15:04:09.929123 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-06-03 15:04:09.929134 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-06-03 15:04:09.929145 | orchestrator | 2025-06-03 15:04:09.929159 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-06-03 15:04:10.579955 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:10.580047 | orchestrator | 2025-06-03 15:04:10.580063 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-06-03 15:04:11.194486 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:11.194582 | orchestrator | 2025-06-03 15:04:11.194597 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-06-03 15:04:11.276065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-06-03 15:04:11.276160 | orchestrator | 2025-06-03 15:04:11.276176 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-06-03 15:04:12.466006 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-06-03 15:04:12.466158 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-06-03 15:04:12.466175 | orchestrator | 2025-06-03 15:04:12.466188 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-06-03 15:04:13.105186 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:13.105304 | orchestrator | 2025-06-03 15:04:13.105321 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-06-03 15:04:13.165721 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:04:13.165797 | orchestrator | 2025-06-03 15:04:13.165806 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-06-03 15:04:13.225075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-06-03 15:04:13.225144 | orchestrator | 2025-06-03 15:04:13.225157 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-06-03 15:04:14.632311 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 15:04:14.632448 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 15:04:14.632465 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:14.632478 | orchestrator | 2025-06-03 15:04:14.632490 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-06-03 15:04:15.288893 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:15.288990 | orchestrator | 2025-06-03 15:04:15.289007 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-06-03 15:04:15.352143 | orchestrator | skipping: [testbed-manager] 2025-06-03 
15:04:15.352233 | orchestrator | 2025-06-03 15:04:15.352247 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-06-03 15:04:15.449137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-06-03 15:04:15.449235 | orchestrator | 2025-06-03 15:04:15.449252 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-06-03 15:04:16.002881 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:16.002974 | orchestrator | 2025-06-03 15:04:16.002992 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-06-03 15:04:16.408524 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:16.408633 | orchestrator | 2025-06-03 15:04:16.408652 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-06-03 15:04:17.684460 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-06-03 15:04:17.684560 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-06-03 15:04:17.684575 | orchestrator | 2025-06-03 15:04:17.684588 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-06-03 15:04:18.367154 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:18.367210 | orchestrator | 2025-06-03 15:04:18.367216 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-06-03 15:04:18.760205 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:18.760304 | orchestrator | 2025-06-03 15:04:18.760321 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-06-03 15:04:19.161840 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:19.161932 | orchestrator | 2025-06-03 15:04:19.161949 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-06-03 15:04:19.214367 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:04:19.214471 | orchestrator | 2025-06-03 15:04:19.214486 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-06-03 15:04:19.301630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-06-03 15:04:19.301705 | orchestrator | 2025-06-03 15:04:19.301718 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-06-03 15:04:19.366532 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:19.366589 | orchestrator | 2025-06-03 15:04:19.366602 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-06-03 15:04:21.443853 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-06-03 15:04:21.443980 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-06-03 15:04:21.443996 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-06-03 15:04:21.444008 | orchestrator | 2025-06-03 15:04:21.444020 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-06-03 15:04:22.172372 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:22.172462 | orchestrator | 2025-06-03 15:04:22.172470 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] 
********************* 2025-06-03 15:04:22.892332 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:22.892473 | orchestrator | 2025-06-03 15:04:22.892490 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-06-03 15:04:23.640490 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:23.640580 | orchestrator | 2025-06-03 15:04:23.640596 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-06-03 15:04:23.725322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-06-03 15:04:23.725488 | orchestrator | 2025-06-03 15:04:23.725504 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-06-03 15:04:23.778476 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:23.778552 | orchestrator | 2025-06-03 15:04:23.778567 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-06-03 15:04:24.532273 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-06-03 15:04:24.532360 | orchestrator | 2025-06-03 15:04:24.532375 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-06-03 15:04:24.622139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-06-03 15:04:24.622225 | orchestrator | 2025-06-03 15:04:24.622242 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-06-03 15:04:25.365286 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:25.365366 | orchestrator | 2025-06-03 15:04:25.365407 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-06-03 15:04:25.996286 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:25.996372 | orchestrator | 2025-06-03 15:04:25.996408 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-06-03 15:04:26.054466 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:04:26.054521 | orchestrator | 2025-06-03 15:04:26.054534 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-06-03 15:04:26.120535 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:26.120563 | orchestrator | 2025-06-03 15:04:26.120569 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-06-03 15:04:26.975512 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:26.975554 | orchestrator | 2025-06-03 15:04:26.975560 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-06-03 15:05:28.692665 | orchestrator | changed: [testbed-manager] 2025-06-03 15:05:28.692767 | orchestrator | 2025-06-03 15:05:28.692781 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-06-03 15:05:29.665006 | orchestrator | ok: [testbed-manager] 2025-06-03 15:05:29.665105 | orchestrator | 2025-06-03 15:05:29.665122 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-06-03 15:05:29.717338 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:05:29.717477 | orchestrator | 2025-06-03 15:05:29.717500 | orchestrator | TASK [osism.services.manager : 
Manage manager service] ************************* 2025-06-03 15:05:32.432360 | orchestrator | changed: [testbed-manager] 2025-06-03 15:05:32.432485 | orchestrator | 2025-06-03 15:05:32.432501 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-06-03 15:05:32.491215 | orchestrator | ok: [testbed-manager] 2025-06-03 15:05:32.491308 | orchestrator | 2025-06-03 15:05:32.491321 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-03 15:05:32.491334 | orchestrator | 2025-06-03 15:05:32.491345 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-03 15:05:32.544809 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:05:32.544896 | orchestrator | 2025-06-03 15:05:32.544941 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-03 15:06:32.606285 | orchestrator | Pausing for 60 seconds 2025-06-03 15:06:32.606425 | orchestrator | changed: [testbed-manager] 2025-06-03 15:06:32.606443 | orchestrator | 2025-06-03 15:06:32.606456 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-03 15:06:36.246563 | orchestrator | changed: [testbed-manager] 2025-06-03 15:06:36.246627 | orchestrator | 2025-06-03 15:06:36.246640 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-03 15:07:17.706096 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-03 15:07:17.706211 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-06-03 15:07:17.706227 | orchestrator | changed: [testbed-manager] 2025-06-03 15:07:17.706240 | orchestrator | 2025-06-03 15:07:17.706252 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-03 15:07:26.928273 | orchestrator | changed: [testbed-manager] 2025-06-03 15:07:26.928438 | orchestrator | 2025-06-03 15:07:26.928478 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-03 15:07:27.004600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-03 15:07:27.004705 | orchestrator | 2025-06-03 15:07:27.004721 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-03 15:07:27.004734 | orchestrator | 2025-06-03 15:07:27.004746 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-03 15:07:27.064247 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:07:27.064323 | orchestrator | 2025-06-03 15:07:27.064336 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:07:27.064349 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-03 15:07:27.064361 | orchestrator | 2025-06-03 15:07:27.166280 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-03 15:07:27.166451 | orchestrator | + deactivate 2025-06-03 15:07:27.166473 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-03 15:07:27.166487 | orchestrator | + 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-03 15:07:27.166499 | orchestrator | + export PATH 2025-06-03 15:07:27.166515 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-03 15:07:27.166527 | orchestrator | + '[' -n '' ']' 2025-06-03 15:07:27.166539 | orchestrator | + hash -r 2025-06-03 15:07:27.166550 | orchestrator | + '[' -n '' ']' 2025-06-03 15:07:27.166561 | orchestrator | + unset VIRTUAL_ENV 2025-06-03 15:07:27.166571 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-03 15:07:27.166582 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-03 15:07:27.166593 | orchestrator | + unset -f deactivate 2025-06-03 15:07:27.166605 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-03 15:07:27.172550 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-03 15:07:27.172632 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-03 15:07:27.172646 | orchestrator | + local max_attempts=60 2025-06-03 15:07:27.172658 | orchestrator | + local name=ceph-ansible 2025-06-03 15:07:27.172670 | orchestrator | + local attempt_num=1 2025-06-03 15:07:27.172900 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:07:27.203441 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:07:27.203512 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-03 15:07:27.203525 | orchestrator | + local max_attempts=60 2025-06-03 15:07:27.203536 | orchestrator | + local name=kolla-ansible 2025-06-03 15:07:27.203547 | orchestrator | + local attempt_num=1 2025-06-03 15:07:27.204358 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-03 15:07:27.243980 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:07:27.244044 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-03 15:07:27.244057 | orchestrator | + local max_attempts=60 2025-06-03 15:07:27.244068 | orchestrator | + local name=osism-ansible 2025-06-03 15:07:27.244079 | orchestrator | + local attempt_num=1 2025-06-03 15:07:27.244745 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-03 15:07:27.274874 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:07:27.274939 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-03 15:07:27.274952 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-03 15:07:27.961106 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-03 15:07:28.147989 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-03 15:07:28.148093 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-03 15:07:28.148110 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-03 15:07:28.148122 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-03 15:07:28.148135 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-03 15:07:28.148146 | orchestrator | 
manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-03 15:07:28.148157 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-03 15:07:28.148168 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-06-03 15:07:28.148178 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-06-03 15:07:28.148189 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-03 15:07:28.148200 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-03 15:07:28.148211 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-03 15:07:28.148221 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-03 15:07:28.148232 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-03 15:07:28.148243 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-03 15:07:28.157663 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-03 15:07:28.203176 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-03 15:07:28.203243 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-03 15:07:28.205478 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-03 15:07:29.937931 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:07:29.938119 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:07:29.938138 | orchestrator | Registering Redlock._release_script 2025-06-03 15:07:30.130260 | orchestrator | 2025-06-03 15:07:30 | INFO  | Task 9b0744ae-5a72-4457-a3ac-c7cedf99f780 (resolvconf) was prepared for execution. 2025-06-03 15:07:30.130349 | orchestrator | 2025-06-03 15:07:30 | INFO  | It takes a moment until task 9b0744ae-5a72-4457-a3ac-c7cedf99f780 (resolvconf) has been started and output is visible here. 
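The wait_for_container_healthy calls traced above poll the Docker health status of each manager container before the deployment continues. Based only on the variable names and the docker inspect call visible in the trace, the helper presumably looks roughly like the sketch below; the retry sleep and the failure message are assumptions, not taken from the job.

wait_for_container_healthy() {
    # Arguments as seen in the trace: maximum number of attempts and container name.
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    # Poll the Docker health status until it reports "healthy".
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # pause between attempts (assumed interval, not visible in the trace)
    done
}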
2025-06-03 15:07:34.079173 | orchestrator | 2025-06-03 15:07:34.079289 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-03 15:07:34.080020 | orchestrator | 2025-06-03 15:07:34.080819 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:07:34.081610 | orchestrator | Tuesday 03 June 2025 15:07:34 +0000 (0:00:00.144) 0:00:00.144 ********** 2025-06-03 15:07:37.688060 | orchestrator | ok: [testbed-manager] 2025-06-03 15:07:37.689188 | orchestrator | 2025-06-03 15:07:37.690128 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-03 15:07:37.690733 | orchestrator | Tuesday 03 June 2025 15:07:37 +0000 (0:00:03.612) 0:00:03.757 ********** 2025-06-03 15:07:37.750583 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:07:37.750700 | orchestrator | 2025-06-03 15:07:37.752086 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-03 15:07:37.752539 | orchestrator | Tuesday 03 June 2025 15:07:37 +0000 (0:00:00.061) 0:00:03.819 ********** 2025-06-03 15:07:37.833562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-03 15:07:37.833746 | orchestrator | 2025-06-03 15:07:37.835823 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-03 15:07:37.835853 | orchestrator | Tuesday 03 June 2025 15:07:37 +0000 (0:00:00.083) 0:00:03.903 ********** 2025-06-03 15:07:37.915926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-03 15:07:37.916437 | orchestrator | 2025-06-03 15:07:37.918282 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-03 15:07:37.918306 | orchestrator | Tuesday 03 June 2025 15:07:37 +0000 (0:00:00.082) 0:00:03.986 ********** 2025-06-03 15:07:38.966259 | orchestrator | ok: [testbed-manager] 2025-06-03 15:07:38.966363 | orchestrator | 2025-06-03 15:07:38.968129 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-03 15:07:38.968856 | orchestrator | Tuesday 03 June 2025 15:07:38 +0000 (0:00:01.047) 0:00:05.033 ********** 2025-06-03 15:07:39.024614 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:07:39.025852 | orchestrator | 2025-06-03 15:07:39.026881 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-03 15:07:39.027954 | orchestrator | Tuesday 03 June 2025 15:07:39 +0000 (0:00:00.060) 0:00:05.094 ********** 2025-06-03 15:07:39.518519 | orchestrator | ok: [testbed-manager] 2025-06-03 15:07:39.519258 | orchestrator | 2025-06-03 15:07:39.519921 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-03 15:07:39.520841 | orchestrator | Tuesday 03 June 2025 15:07:39 +0000 (0:00:00.494) 0:00:05.588 ********** 2025-06-03 15:07:39.597378 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:07:39.598884 | orchestrator | 2025-06-03 15:07:39.599511 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-03 15:07:39.600241 | orchestrator | Tuesday 03 June 2025 15:07:39 +0000 (0:00:00.078) 0:00:05.667 
********** 2025-06-03 15:07:40.125844 | orchestrator | changed: [testbed-manager] 2025-06-03 15:07:40.125942 | orchestrator | 2025-06-03 15:07:40.125958 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-03 15:07:40.125972 | orchestrator | Tuesday 03 June 2025 15:07:40 +0000 (0:00:00.525) 0:00:06.193 ********** 2025-06-03 15:07:41.145983 | orchestrator | changed: [testbed-manager] 2025-06-03 15:07:41.146174 | orchestrator | 2025-06-03 15:07:41.146191 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-03 15:07:41.147498 | orchestrator | Tuesday 03 June 2025 15:07:41 +0000 (0:00:01.020) 0:00:07.214 ********** 2025-06-03 15:07:42.069869 | orchestrator | ok: [testbed-manager] 2025-06-03 15:07:42.070287 | orchestrator | 2025-06-03 15:07:42.070767 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-03 15:07:42.070794 | orchestrator | Tuesday 03 June 2025 15:07:42 +0000 (0:00:00.924) 0:00:08.139 ********** 2025-06-03 15:07:42.153692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-03 15:07:42.154201 | orchestrator | 2025-06-03 15:07:42.155227 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-03 15:07:42.155691 | orchestrator | Tuesday 03 June 2025 15:07:42 +0000 (0:00:00.083) 0:00:08.222 ********** 2025-06-03 15:07:43.278850 | orchestrator | changed: [testbed-manager] 2025-06-03 15:07:43.279059 | orchestrator | 2025-06-03 15:07:43.280206 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:07:43.280841 | orchestrator | 2025-06-03 15:07:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:07:43.281327 | orchestrator | 2025-06-03 15:07:43 | INFO  | Please wait and do not abort execution. 
2025-06-03 15:07:43.282171 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:07:43.283065 | orchestrator | 2025-06-03 15:07:43.284043 | orchestrator | 2025-06-03 15:07:43.284982 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:07:43.285735 | orchestrator | Tuesday 03 June 2025 15:07:43 +0000 (0:00:01.125) 0:00:09.348 ********** 2025-06-03 15:07:43.286420 | orchestrator | =============================================================================== 2025-06-03 15:07:43.287245 | orchestrator | Gathering Facts --------------------------------------------------------- 3.61s 2025-06-03 15:07:43.287640 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.13s 2025-06-03 15:07:43.288126 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.05s 2025-06-03 15:07:43.288741 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.02s 2025-06-03 15:07:43.289513 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.92s 2025-06-03 15:07:43.290132 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s 2025-06-03 15:07:43.290753 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2025-06-03 15:07:43.291180 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-06-03 15:07:43.291853 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-06-03 15:07:43.292260 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-06-03 15:07:43.293093 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-06-03 15:07:43.293709 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-06-03 15:07:43.294453 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-06-03 15:07:43.588322 | orchestrator | + osism apply sshconfig 2025-06-03 15:07:45.113388 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:07:45.113549 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:07:45.113563 | orchestrator | Registering Redlock._release_script 2025-06-03 15:07:45.161994 | orchestrator | 2025-06-03 15:07:45 | INFO  | Task 2d778513-f13a-4a00-9614-2d6088b3a354 (sshconfig) was prepared for execution. 2025-06-03 15:07:45.162127 | orchestrator | 2025-06-03 15:07:45 | INFO  | It takes a moment until task 2d778513-f13a-4a00-9614-2d6088b3a354 (sshconfig) has been started and output is visible here. 
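The resolvconf play above links /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf and restarts systemd-resolved on the manager. If the result needs to be checked by hand, a quick verification could look like this; it is not part of the job, and the queried hostname is only an example:

readlink -f /etc/resolv.conf            # expected: /run/systemd/resolve/stub-resolv.conf
systemctl is-active systemd-resolved    # expected: active
resolvectl query testbed-node-0         # example lookup through the local stub resolver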
2025-06-03 15:07:48.752238 | orchestrator | 2025-06-03 15:07:48.752345 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-03 15:07:48.752775 | orchestrator | 2025-06-03 15:07:48.754881 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-03 15:07:48.755810 | orchestrator | Tuesday 03 June 2025 15:07:48 +0000 (0:00:00.149) 0:00:00.149 ********** 2025-06-03 15:07:49.226469 | orchestrator | ok: [testbed-manager] 2025-06-03 15:07:49.226571 | orchestrator | 2025-06-03 15:07:49.226597 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-06-03 15:07:49.226772 | orchestrator | Tuesday 03 June 2025 15:07:49 +0000 (0:00:00.480) 0:00:00.630 ********** 2025-06-03 15:07:49.697110 | orchestrator | changed: [testbed-manager] 2025-06-03 15:07:49.697773 | orchestrator | 2025-06-03 15:07:49.698606 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-06-03 15:07:49.699114 | orchestrator | Tuesday 03 June 2025 15:07:49 +0000 (0:00:00.469) 0:00:01.099 ********** 2025-06-03 15:07:54.844138 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-03 15:07:54.844687 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-03 15:07:54.845881 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-03 15:07:54.846790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-03 15:07:54.848484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-03 15:07:54.849970 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-03 15:07:54.850186 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-03 15:07:54.851906 | orchestrator | 2025-06-03 15:07:54.851954 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-06-03 15:07:54.852532 | orchestrator | Tuesday 03 June 2025 15:07:54 +0000 (0:00:05.145) 0:00:06.245 ********** 2025-06-03 15:07:54.910239 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:07:54.910956 | orchestrator | 2025-06-03 15:07:54.911799 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-03 15:07:54.912245 | orchestrator | Tuesday 03 June 2025 15:07:54 +0000 (0:00:00.068) 0:00:06.313 ********** 2025-06-03 15:07:55.416221 | orchestrator | changed: [testbed-manager] 2025-06-03 15:07:55.417534 | orchestrator | 2025-06-03 15:07:55.418177 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:07:55.418541 | orchestrator | 2025-06-03 15:07:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:07:55.419034 | orchestrator | 2025-06-03 15:07:55 | INFO  | Please wait and do not abort execution. 
2025-06-03 15:07:55.420112 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:07:55.420885 | orchestrator | 2025-06-03 15:07:55.421457 | orchestrator | 2025-06-03 15:07:55.422620 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:07:55.423043 | orchestrator | Tuesday 03 June 2025 15:07:55 +0000 (0:00:00.506) 0:00:06.820 ********** 2025-06-03 15:07:55.423529 | orchestrator | =============================================================================== 2025-06-03 15:07:55.423913 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.15s 2025-06-03 15:07:55.424653 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.51s 2025-06-03 15:07:55.424898 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.48s 2025-06-03 15:07:55.425372 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.47s 2025-06-03 15:07:55.425848 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-06-03 15:07:55.745020 | orchestrator | + osism apply known-hosts 2025-06-03 15:07:57.258524 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:07:57.258679 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:07:57.258698 | orchestrator | Registering Redlock._release_script 2025-06-03 15:07:57.314930 | orchestrator | 2025-06-03 15:07:57 | INFO  | Task 30361dc1-462a-4d2d-84dd-21650a7b88c1 (known-hosts) was prepared for execution. 2025-06-03 15:07:57.315021 | orchestrator | 2025-06-03 15:07:57 | INFO  | It takes a moment until task 30361dc1-462a-4d2d-84dd-21650a7b88c1 (known-hosts) has been started and output is visible here. 
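The sshconfig play above writes one fragment per host into .ssh/config.d and then assembles them into a single ssh config for the operator user. A rough shell equivalent of that flow is sketched below; the fragment contents (user name, options) are assumptions based on the task names, not the actual role templates:

mkdir -p ~/.ssh/config.d
for host in testbed-manager testbed-node-{0..5}; do
    # One fragment per host ("Ensure config for each host exist")
    printf 'Host %s\n    User dragon\n    StrictHostKeyChecking yes\n\n' "$host" \
        > ~/.ssh/config.d/"$host"
done
cat ~/.ssh/config.d/* > ~/.ssh/config     # "Assemble ssh config"
chmod 0600 ~/.ssh/config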
2025-06-03 15:08:01.144539 | orchestrator | 2025-06-03 15:08:01.145182 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-03 15:08:01.146709 | orchestrator | 2025-06-03 15:08:01.148120 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-03 15:08:01.148511 | orchestrator | Tuesday 03 June 2025 15:08:01 +0000 (0:00:00.167) 0:00:00.167 ********** 2025-06-03 15:08:07.155242 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-03 15:08:07.155527 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-03 15:08:07.157845 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-03 15:08:07.159280 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-03 15:08:07.160055 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-03 15:08:07.160578 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-03 15:08:07.161107 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-03 15:08:07.161599 | orchestrator | 2025-06-03 15:08:07.162119 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-03 15:08:07.162594 | orchestrator | Tuesday 03 June 2025 15:08:07 +0000 (0:00:06.010) 0:00:06.177 ********** 2025-06-03 15:08:07.330231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-03 15:08:07.330319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-03 15:08:07.330442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-03 15:08:07.331148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-03 15:08:07.332945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-03 15:08:07.334010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-03 15:08:07.334868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-03 15:08:07.335596 | orchestrator | 2025-06-03 15:08:07.336552 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:07.337497 | orchestrator | Tuesday 03 June 2025 15:08:07 +0000 (0:00:00.174) 0:00:06.351 ********** 2025-06-03 15:08:08.562227 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHx43P4/aCtLUpbPBq1c4W2DZT46nbjT6HcWGrXghvmF) 2025-06-03 15:08:08.562579 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCbUm/dgxiuuDS0k6k1Py1WMO0gfcMHtOMDkE8eVKr5mnMmLo5cfjnRAhlTl25RpAu7ebYm3MME3p4zCoKLmamZCopylGHGzoRMh+9SlerJ+RqASfhPvgLRK1JS9twIMEwGNzlqIVlJBTDO828HVMWejVd1rW5+vqGV0XeK20rOuYAmpgE8H6K2m4uN/kjlpOZFz+v8vk4ZfDNVjMMsi47Q6wIWMvUCHL4qonihaIfU2c2mmLJPSYJyfvelGGWu0qJMQfzvW22fsNVkcPgO9WUm6Nu7P/gxUxrJRIvLeqQqttwaT3Y3yizjFzhi03Wws1JHFjO/oBTVwJN2QVtrbQ+TAO4qTFKWArLyqRmIG8hyC87yod7G7jg6Eiq0XP0HDbuerN53Nqdnx5D9VNW5z4FyiFglVnAo3LvUcmy1u+Z++AEMh5weUy4brjU9xzQ4nX+2TsWFuSzF2wawgO1w83jYaJ68GeKNWgkyDylWl1JIcjZr9oe8gKfZ1PnLk4uPUwU=) 2025-06-03 15:08:08.563549 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKjPTKoyrur1rFP39UtS4/PBH+lz+jb2ajBKbtZppzzlyUb9oktz3bJo5t1quyj3mZzT2Jf+6IE2aZ5dDrIIrA4=) 2025-06-03 15:08:08.563830 | orchestrator | 2025-06-03 15:08:08.564617 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:08.564976 | orchestrator | Tuesday 03 June 2025 15:08:08 +0000 (0:00:01.233) 0:00:07.585 ********** 2025-06-03 15:08:09.582664 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSlIEQxfa2ZsbUDjlJYl+g83uHFQwmLR7nEOuYgwc3EEzTveAncy9HLuI5xgOzxgl403Ije4u2V+usXMcyxokcgDfQJGFfeI683au9LSDQHYQuGa9rR58xVX3IkGEB04Lr33lz2RSqj/8aNulXSNmtivp7OZXHnmuwO/Se9CuqlZkXMMGfbyGPbtJL2ZZjObOjsrZKJnV6xOUZeDkFTNVwgbgQFgTLDNFAGYiiyLJ+jUGHYLqA6HASje2XLH+7gFb17tPdCDMh9TFVdDIs0fLLti+H+NaDGbIhAW2JOrEDGiiZ0UpuqeceHKPLvNKKm3CeoJTImekbvw5ssk9ANlcgUx0pc/2p2+spSydkT4wSHQ085TCu+5RLOpRPdfKAIqS756pcHKNmyunqVQ0AsqLaQbgiCihHchp1268qMxywq0lMaKjgax24lTMgrDYRcWRdT/WBQek84tNzC/ByLpp9TOX2Y+0hRHUmH2y24k2mNUm9iCeqiv9UVZaxXyKXt9c=) 2025-06-03 15:08:09.583438 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCVkDMDcn4en3sqO3NRMBhgxfDQWag1Ltybiaq657RKY/QSqLv+rzbHFIDEqu085GJ50OEjE3flyGt8oakLzNjI=) 2025-06-03 15:08:09.584799 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAkPep2ZSMfnll3nHynvcDJEKGtX3anQqhv/NjqKM/DR) 2025-06-03 15:08:09.585837 | orchestrator | 2025-06-03 15:08:09.586840 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:09.587458 | orchestrator | Tuesday 03 June 2025 15:08:09 +0000 (0:00:01.018) 0:00:08.604 ********** 2025-06-03 15:08:10.705858 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvJ8evS1arQVcdTEM4j77OsV6Vbmm1nUUvuSnhY/VniCXjICqfJ5CLsvFJ1zoeZek0uuwLAD5QFs+uZQU7Bka6poGdlCKTwJMbCyfLzdbejIgfKOTdVVjlZ6/Pvi2B6+Wb1JRFAS5RTD74ibZpuArcaRsyZizJ/O9ze5yK/HFs4nah23d/eeqbTIc1u6/nywOkJ9qYGhDEhdDuRJdFRhaqRkyEtF1zRa5mMUZCrPOhM1sn8f2nkDpAXEPwryDSBbJ3B8iljkfNR5JaEbeul8J+Z0dBFhwinyeD/0Cx/BBB4kTj8IX7vO8oRKOY6Vh8V2/hMfcKZwDBUfgopezbjzzwwYlSdudwe+l3NMi6pnVwNtrlsqM85WxAEMQGY6uIcjRn+YdRQ8pBA4lNeK2DvtTxtCeuk1e2U8TXAbMWOtGhufvhiLaonsFkI9Ji/4KF8fUZo35EAU3APrBi7Sd57hKR4KMWZRKw69StVgg3B8+XFNnUr3qA0PTV2v3NdoTFcQE=) 2025-06-03 15:08:10.706219 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBATXJlt/MV1GKdqDhdAOMtF3ApQxpieDd7MPqoGcljHE87Of2r2EEc1lpI7do3hpZrhxWModhdeSgeqKxgYLGjs=) 2025-06-03 15:08:10.706576 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMrS4KZ4EuKu6R0VxWtiHkpjQ9vdiVWxqF5uAaBXOtMe) 2025-06-03 
15:08:10.707847 | orchestrator | 2025-06-03 15:08:10.708043 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:10.708602 | orchestrator | Tuesday 03 June 2025 15:08:10 +0000 (0:00:01.124) 0:00:09.729 ********** 2025-06-03 15:08:11.801945 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCV/+HdBwTATsK9rJJE4xkDx1nEvrTNjfqz3etPYWrGLSkC5XxEKpPlw0mKupsp77bHWK2jyvmLg6ghQJYn53cjVljVn1rBoKHkE9tkkfVOvnXbise37KZ5SkLIA7Y16YouEqDHHxUKBFp/A7aZL4Yx59grjV2cu/8qQ5pm/n+VBEdhysLS9y3rU3KHYWrHjbXDIjziSpH2VjZ4C3lmlHk4TeWVDdgHi8iKsaHuPpTVyG3GHOcN0tzW5L29jYBrkIFUKSKnjjZrU7twYH1R/ApKsF1U/+Q+OSGpti6zNzvJ78H7aDa2AiqFzDNsVagChXZ2u7m/W5Kifs428kDQA1xfWUHKAUsmH3jtB00P6qB1+LYJbz+IhhOqKVaj+vHrEsxLyNc/6MtJB6P+u2LZ54lF6hmhvHE85QuaeJj8GNBcu4XNXbebrwNgFRKuuZhEFhO072Erl8c5g8gk7jBn9gqIM77iYJPpfFAm/fA2gMR0iK74N8wEINI3B9pge1FmH+0=) 2025-06-03 15:08:11.802273 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBMzbQX1N6Ff08tesqYGLOQo11cmJNNqLSsEntBriZueG/mF9lnkJvJZ9X60Tl0Op37gLmwr2tkgsbpfUvtGbwo=) 2025-06-03 15:08:11.802877 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMsw0Z7YtorOVGlb4NHtVBXK4k5fRmXYS2ljd5IsBJ02) 2025-06-03 15:08:11.803279 | orchestrator | 2025-06-03 15:08:11.804547 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:11.804703 | orchestrator | Tuesday 03 June 2025 15:08:11 +0000 (0:00:01.094) 0:00:10.824 ********** 2025-06-03 15:08:12.877883 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSn5wElLvDh6GLO/+NlAE2Bvv8IcXf5XZndpLxkm6mKF2jT8hGYTeBY8VHMhNjoG8OscMPMD/sJKCFd4h6LVpCVxgUOoXRJ+wsbavDMMavbV3CZf8CXIWDS0wQPKb3oJLvj+LnT65y+gVzko0985lknN2hWkIA2nwh5zrhvs1rhUbLKMOR+fw4BZ8aJx/L2gPVXB9RAscpGJ24UrDFYhKV4qAS99x+A73iUB/7T7EInTnfOA5Zv2mGQO5L8YwHyINNbpT5nM2bcBjzFQK8MHXDMBQxXTKTEVjdULIeHHzx8ZUrPyBc/ziROmUE333XPI2D+ktmt6xvfiv2p+aHjBrwIU04A/b569CyelssN0ZnLwH8hLtQteDhwNFlwGW+6/ndnO9edpjhlu8AmUKTomMZU2s85tZVYV55Z91z+5sOGpUAevaS8N6EWNJBH6zX5xBK7jb3D61Tq/aDho2g64kDy2dRDxxy5uMt5WQOfYfg1qaPnBt493lUeOGuqr8bnJU=) 2025-06-03 15:08:12.878530 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOspbQeSVyNNREjMS06vit3dKD84Oo+VH9vnSHoP1vBN+N6PG21cRd+VIQkFw5eAxCBHSGCuejdOdolk0jAq8pg=) 2025-06-03 15:08:12.880878 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN0nDUaSHGkzEpov7H4/h+c4MFPF0pJtNnA2U57X1dri) 2025-06-03 15:08:12.881850 | orchestrator | 2025-06-03 15:08:12.883175 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:12.883252 | orchestrator | Tuesday 03 June 2025 15:08:12 +0000 (0:00:01.075) 0:00:11.900 ********** 2025-06-03 15:08:13.964234 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOOfUG328VU7qTi7v8GmyAHPIyJjf6U4sI2b6jQ5wNSJoN4XVt8y6pK+mgFIXYRw60oz59rttRmkWqjhg1cNh5o=) 2025-06-03 15:08:13.966080 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDtNTIBAWOvH0Uq5i4evqrHo3YyHMdWyEYb9y9VszxiN/Nm3H8/6G1rf2ieixCJy1nS4C4vO6fulEdKETy/JkIoZnvoKkktrU8l4FWMMxblb6t9tv30OjNZHr/sMnY00Jwr3BJPakwLXnb48kkL1+4dMxIh2GXemKk7G9lPEUkqfRg5Jp98nMNNamuUC7jspdpSo5gPz2PqQWQn6r8ggHY/P5LYo9NbqDw4byv6pBx3ovXboddRU6kE9Uk12G7zEzTiVSdSn55oc1PRQzKd1TYlhdflPjUwyX0LI5PPK35ICXWlBs41KjprbQSezirix9snT0ULi5UBeZ8bH4syGwFdPGN/4mKvNZ1qjmE6XJbmJQTZiaaqU8h6j5rNJnLT3DsVHuNIJ1KTpi0biyeqCW6Jjg7ycz/wBxVpgzXio/KKpBkAewoXbvjKDYYXh37d2jrkC8IgyJVthF7oM0N1Qw+f/owR4cncOeDFlXKAJ9I0Z2b30rzBt8zaBdUl3i3us6c=) 2025-06-03 15:08:13.966762 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAS5ECZZ7Oara/NpL42XgTcKT4No+tkpdi6RKsunEgvN) 2025-06-03 15:08:13.967687 | orchestrator | 2025-06-03 15:08:13.968551 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:13.968955 | orchestrator | Tuesday 03 June 2025 15:08:13 +0000 (0:00:01.086) 0:00:12.987 ********** 2025-06-03 15:08:15.100467 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERehLbqX6HJZQ+SqgT64eEIfhUmthqa47+qtcz/9TrSCAJMtHbIkJMLL5vMHg3Fq36XQW7cDvoSO41XNLnv3gU=) 2025-06-03 15:08:15.100913 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMSvFuTrwtIR0F7JMKpXfxhmWIHlSqzxssTS3EsUwpou6KkNGkR3+16h7Vz/XmKliSKS2TIsPygOWsbZK+4Mbl8Er3zFa7D7tbpZz4SQFjz91hxNvi3vxZXTNnnnfJ8gu2hkmVQrYQEAV+x/1Yo54DGoGeK5DMjTM/O7WEA/P0VZtVht85vQOoH0NVBSiWPoeovEo9tN9Th+OFPfcZA5US/q3EMgNGqXKCtVuZodU9y+yBEVcUlEpqMp4KMRX9RmUewBkNPRRLu87o/0WVu3emaR7YQveVHS/zxP2lDaTyOcfidCv0s5yDKeIhP7geDP5oyG7B3hadS/Pomq7c2gnqFY1Ov7LJ0VASBTyZO0+UP0jDATkntfS3aWS7/pneXzD/X4xFL6vKbHKeWxpx6Z2ghiae/ZtuPVY+Diw8WXr2evug//4zme0f1jMCAqCSPddpJX/S1mya6c486F7N03Pss2JXJ3I/xPA8NgFyXGnizOV9JuAAWYZ5umkDqJYP2rc=) 2025-06-03 15:08:15.100988 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICiK/sv5AOwkm4bW9cB8NOrhs8FgEAUGxcxbIGc1svGo) 2025-06-03 15:08:15.102735 | orchestrator | 2025-06-03 15:08:15.103561 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-06-03 15:08:15.103874 | orchestrator | Tuesday 03 June 2025 15:08:15 +0000 (0:00:01.135) 0:00:14.123 ********** 2025-06-03 15:08:20.410654 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-03 15:08:20.411072 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-03 15:08:20.413628 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-03 15:08:20.413657 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-03 15:08:20.413669 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-03 15:08:20.414961 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-03 15:08:20.415476 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-03 15:08:20.416478 | orchestrator | 2025-06-03 15:08:20.416587 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-06-03 15:08:20.417372 | orchestrator | Tuesday 03 June 2025 15:08:20 +0000 (0:00:05.310) 0:00:19.433 ********** 2025-06-03 15:08:20.585165 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-manager) 2025-06-03 15:08:20.585547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-03 15:08:20.586316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-03 15:08:20.587143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-03 15:08:20.588113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-03 15:08:20.588619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-03 15:08:20.589092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-03 15:08:20.590078 | orchestrator | 2025-06-03 15:08:20.590587 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:20.591451 | orchestrator | Tuesday 03 June 2025 15:08:20 +0000 (0:00:00.175) 0:00:19.609 ********** 2025-06-03 15:08:21.700129 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbUm/dgxiuuDS0k6k1Py1WMO0gfcMHtOMDkE8eVKr5mnMmLo5cfjnRAhlTl25RpAu7ebYm3MME3p4zCoKLmamZCopylGHGzoRMh+9SlerJ+RqASfhPvgLRK1JS9twIMEwGNzlqIVlJBTDO828HVMWejVd1rW5+vqGV0XeK20rOuYAmpgE8H6K2m4uN/kjlpOZFz+v8vk4ZfDNVjMMsi47Q6wIWMvUCHL4qonihaIfU2c2mmLJPSYJyfvelGGWu0qJMQfzvW22fsNVkcPgO9WUm6Nu7P/gxUxrJRIvLeqQqttwaT3Y3yizjFzhi03Wws1JHFjO/oBTVwJN2QVtrbQ+TAO4qTFKWArLyqRmIG8hyC87yod7G7jg6Eiq0XP0HDbuerN53Nqdnx5D9VNW5z4FyiFglVnAo3LvUcmy1u+Z++AEMh5weUy4brjU9xzQ4nX+2TsWFuSzF2wawgO1w83jYaJ68GeKNWgkyDylWl1JIcjZr9oe8gKfZ1PnLk4uPUwU=) 2025-06-03 15:08:21.702000 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKjPTKoyrur1rFP39UtS4/PBH+lz+jb2ajBKbtZppzzlyUb9oktz3bJo5t1quyj3mZzT2Jf+6IE2aZ5dDrIIrA4=) 2025-06-03 15:08:21.703331 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHx43P4/aCtLUpbPBq1c4W2DZT46nbjT6HcWGrXghvmF) 2025-06-03 15:08:21.704120 | orchestrator | 2025-06-03 15:08:21.705096 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:21.705797 | orchestrator | Tuesday 03 June 2025 15:08:21 +0000 (0:00:01.112) 0:00:20.722 ********** 2025-06-03 15:08:22.792026 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAkPep2ZSMfnll3nHynvcDJEKGtX3anQqhv/NjqKM/DR) 2025-06-03 15:08:22.792181 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCSlIEQxfa2ZsbUDjlJYl+g83uHFQwmLR7nEOuYgwc3EEzTveAncy9HLuI5xgOzxgl403Ije4u2V+usXMcyxokcgDfQJGFfeI683au9LSDQHYQuGa9rR58xVX3IkGEB04Lr33lz2RSqj/8aNulXSNmtivp7OZXHnmuwO/Se9CuqlZkXMMGfbyGPbtJL2ZZjObOjsrZKJnV6xOUZeDkFTNVwgbgQFgTLDNFAGYiiyLJ+jUGHYLqA6HASje2XLH+7gFb17tPdCDMh9TFVdDIs0fLLti+H+NaDGbIhAW2JOrEDGiiZ0UpuqeceHKPLvNKKm3CeoJTImekbvw5ssk9ANlcgUx0pc/2p2+spSydkT4wSHQ085TCu+5RLOpRPdfKAIqS756pcHKNmyunqVQ0AsqLaQbgiCihHchp1268qMxywq0lMaKjgax24lTMgrDYRcWRdT/WBQek84tNzC/ByLpp9TOX2Y+0hRHUmH2y24k2mNUm9iCeqiv9UVZaxXyKXt9c=) 2025-06-03 15:08:22.793711 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCVkDMDcn4en3sqO3NRMBhgxfDQWag1Ltybiaq657RKY/QSqLv+rzbHFIDEqu085GJ50OEjE3flyGt8oakLzNjI=) 2025-06-03 15:08:22.794903 | orchestrator | 2025-06-03 15:08:22.795879 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:22.796643 | orchestrator | Tuesday 03 June 2025 15:08:22 +0000 (0:00:01.092) 0:00:21.815 ********** 2025-06-03 15:08:23.840580 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBATXJlt/MV1GKdqDhdAOMtF3ApQxpieDd7MPqoGcljHE87Of2r2EEc1lpI7do3hpZrhxWModhdeSgeqKxgYLGjs=) 2025-06-03 15:08:23.841169 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvJ8evS1arQVcdTEM4j77OsV6Vbmm1nUUvuSnhY/VniCXjICqfJ5CLsvFJ1zoeZek0uuwLAD5QFs+uZQU7Bka6poGdlCKTwJMbCyfLzdbejIgfKOTdVVjlZ6/Pvi2B6+Wb1JRFAS5RTD74ibZpuArcaRsyZizJ/O9ze5yK/HFs4nah23d/eeqbTIc1u6/nywOkJ9qYGhDEhdDuRJdFRhaqRkyEtF1zRa5mMUZCrPOhM1sn8f2nkDpAXEPwryDSBbJ3B8iljkfNR5JaEbeul8J+Z0dBFhwinyeD/0Cx/BBB4kTj8IX7vO8oRKOY6Vh8V2/hMfcKZwDBUfgopezbjzzwwYlSdudwe+l3NMi6pnVwNtrlsqM85WxAEMQGY6uIcjRn+YdRQ8pBA4lNeK2DvtTxtCeuk1e2U8TXAbMWOtGhufvhiLaonsFkI9Ji/4KF8fUZo35EAU3APrBi7Sd57hKR4KMWZRKw69StVgg3B8+XFNnUr3qA0PTV2v3NdoTFcQE=) 2025-06-03 15:08:23.841661 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMrS4KZ4EuKu6R0VxWtiHkpjQ9vdiVWxqF5uAaBXOtMe) 2025-06-03 15:08:23.842972 | orchestrator | 2025-06-03 15:08:23.843000 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:23.843879 | orchestrator | Tuesday 03 June 2025 15:08:23 +0000 (0:00:01.049) 0:00:22.864 ********** 2025-06-03 15:08:24.930135 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBMzbQX1N6Ff08tesqYGLOQo11cmJNNqLSsEntBriZueG/mF9lnkJvJZ9X60Tl0Op37gLmwr2tkgsbpfUvtGbwo=) 2025-06-03 15:08:24.931460 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCV/+HdBwTATsK9rJJE4xkDx1nEvrTNjfqz3etPYWrGLSkC5XxEKpPlw0mKupsp77bHWK2jyvmLg6ghQJYn53cjVljVn1rBoKHkE9tkkfVOvnXbise37KZ5SkLIA7Y16YouEqDHHxUKBFp/A7aZL4Yx59grjV2cu/8qQ5pm/n+VBEdhysLS9y3rU3KHYWrHjbXDIjziSpH2VjZ4C3lmlHk4TeWVDdgHi8iKsaHuPpTVyG3GHOcN0tzW5L29jYBrkIFUKSKnjjZrU7twYH1R/ApKsF1U/+Q+OSGpti6zNzvJ78H7aDa2AiqFzDNsVagChXZ2u7m/W5Kifs428kDQA1xfWUHKAUsmH3jtB00P6qB1+LYJbz+IhhOqKVaj+vHrEsxLyNc/6MtJB6P+u2LZ54lF6hmhvHE85QuaeJj8GNBcu4XNXbebrwNgFRKuuZhEFhO072Erl8c5g8gk7jBn9gqIM77iYJPpfFAm/fA2gMR0iK74N8wEINI3B9pge1FmH+0=) 2025-06-03 15:08:24.932507 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMsw0Z7YtorOVGlb4NHtVBXK4k5fRmXYS2ljd5IsBJ02) 2025-06-03 
15:08:24.933538 | orchestrator | 2025-06-03 15:08:24.934427 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:24.935036 | orchestrator | Tuesday 03 June 2025 15:08:24 +0000 (0:00:01.088) 0:00:23.952 ********** 2025-06-03 15:08:26.045581 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSn5wElLvDh6GLO/+NlAE2Bvv8IcXf5XZndpLxkm6mKF2jT8hGYTeBY8VHMhNjoG8OscMPMD/sJKCFd4h6LVpCVxgUOoXRJ+wsbavDMMavbV3CZf8CXIWDS0wQPKb3oJLvj+LnT65y+gVzko0985lknN2hWkIA2nwh5zrhvs1rhUbLKMOR+fw4BZ8aJx/L2gPVXB9RAscpGJ24UrDFYhKV4qAS99x+A73iUB/7T7EInTnfOA5Zv2mGQO5L8YwHyINNbpT5nM2bcBjzFQK8MHXDMBQxXTKTEVjdULIeHHzx8ZUrPyBc/ziROmUE333XPI2D+ktmt6xvfiv2p+aHjBrwIU04A/b569CyelssN0ZnLwH8hLtQteDhwNFlwGW+6/ndnO9edpjhlu8AmUKTomMZU2s85tZVYV55Z91z+5sOGpUAevaS8N6EWNJBH6zX5xBK7jb3D61Tq/aDho2g64kDy2dRDxxy5uMt5WQOfYfg1qaPnBt493lUeOGuqr8bnJU=) 2025-06-03 15:08:26.046987 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOspbQeSVyNNREjMS06vit3dKD84Oo+VH9vnSHoP1vBN+N6PG21cRd+VIQkFw5eAxCBHSGCuejdOdolk0jAq8pg=) 2025-06-03 15:08:26.047758 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN0nDUaSHGkzEpov7H4/h+c4MFPF0pJtNnA2U57X1dri) 2025-06-03 15:08:26.048371 | orchestrator | 2025-06-03 15:08:26.049525 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:26.050671 | orchestrator | Tuesday 03 June 2025 15:08:26 +0000 (0:00:01.116) 0:00:25.069 ********** 2025-06-03 15:08:26.988093 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtNTIBAWOvH0Uq5i4evqrHo3YyHMdWyEYb9y9VszxiN/Nm3H8/6G1rf2ieixCJy1nS4C4vO6fulEdKETy/JkIoZnvoKkktrU8l4FWMMxblb6t9tv30OjNZHr/sMnY00Jwr3BJPakwLXnb48kkL1+4dMxIh2GXemKk7G9lPEUkqfRg5Jp98nMNNamuUC7jspdpSo5gPz2PqQWQn6r8ggHY/P5LYo9NbqDw4byv6pBx3ovXboddRU6kE9Uk12G7zEzTiVSdSn55oc1PRQzKd1TYlhdflPjUwyX0LI5PPK35ICXWlBs41KjprbQSezirix9snT0ULi5UBeZ8bH4syGwFdPGN/4mKvNZ1qjmE6XJbmJQTZiaaqU8h6j5rNJnLT3DsVHuNIJ1KTpi0biyeqCW6Jjg7ycz/wBxVpgzXio/KKpBkAewoXbvjKDYYXh37d2jrkC8IgyJVthF7oM0N1Qw+f/owR4cncOeDFlXKAJ9I0Z2b30rzBt8zaBdUl3i3us6c=) 2025-06-03 15:08:26.989237 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOOfUG328VU7qTi7v8GmyAHPIyJjf6U4sI2b6jQ5wNSJoN4XVt8y6pK+mgFIXYRw60oz59rttRmkWqjhg1cNh5o=) 2025-06-03 15:08:26.990800 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAS5ECZZ7Oara/NpL42XgTcKT4No+tkpdi6RKsunEgvN) 2025-06-03 15:08:26.991888 | orchestrator | 2025-06-03 15:08:26.992443 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:26.993220 | orchestrator | Tuesday 03 June 2025 15:08:26 +0000 (0:00:00.942) 0:00:26.012 ********** 2025-06-03 15:08:27.962669 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERehLbqX6HJZQ+SqgT64eEIfhUmthqa47+qtcz/9TrSCAJMtHbIkJMLL5vMHg3Fq36XQW7cDvoSO41XNLnv3gU=) 2025-06-03 15:08:27.963914 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMSvFuTrwtIR0F7JMKpXfxhmWIHlSqzxssTS3EsUwpou6KkNGkR3+16h7Vz/XmKliSKS2TIsPygOWsbZK+4Mbl8Er3zFa7D7tbpZz4SQFjz91hxNvi3vxZXTNnnnfJ8gu2hkmVQrYQEAV+x/1Yo54DGoGeK5DMjTM/O7WEA/P0VZtVht85vQOoH0NVBSiWPoeovEo9tN9Th+OFPfcZA5US/q3EMgNGqXKCtVuZodU9y+yBEVcUlEpqMp4KMRX9RmUewBkNPRRLu87o/0WVu3emaR7YQveVHS/zxP2lDaTyOcfidCv0s5yDKeIhP7geDP5oyG7B3hadS/Pomq7c2gnqFY1Ov7LJ0VASBTyZO0+UP0jDATkntfS3aWS7/pneXzD/X4xFL6vKbHKeWxpx6Z2ghiae/ZtuPVY+Diw8WXr2evug//4zme0f1jMCAqCSPddpJX/S1mya6c486F7N03Pss2JXJ3I/xPA8NgFyXGnizOV9JuAAWYZ5umkDqJYP2rc=) 2025-06-03 15:08:27.964602 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICiK/sv5AOwkm4bW9cB8NOrhs8FgEAUGxcxbIGc1svGo) 2025-06-03 15:08:27.965062 | orchestrator | 2025-06-03 15:08:27.965800 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-03 15:08:27.968110 | orchestrator | Tuesday 03 June 2025 15:08:27 +0000 (0:00:00.974) 0:00:26.987 ********** 2025-06-03 15:08:28.101529 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-03 15:08:28.101700 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-03 15:08:28.102263 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-03 15:08:28.103690 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-03 15:08:28.103712 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-03 15:08:28.103724 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-03 15:08:28.104376 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-03 15:08:28.104425 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:08:28.104441 | orchestrator | 2025-06-03 15:08:28.104897 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-03 15:08:28.105061 | orchestrator | Tuesday 03 June 2025 15:08:28 +0000 (0:00:00.140) 0:00:27.127 ********** 2025-06-03 15:08:28.164212 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:08:28.164320 | orchestrator | 2025-06-03 15:08:28.164408 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-03 15:08:28.164436 | orchestrator | Tuesday 03 June 2025 15:08:28 +0000 (0:00:00.062) 0:00:27.190 ********** 2025-06-03 15:08:28.223087 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:08:28.224033 | orchestrator | 2025-06-03 15:08:28.224837 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-06-03 15:08:28.225666 | orchestrator | Tuesday 03 June 2025 15:08:28 +0000 (0:00:00.057) 0:00:27.247 ********** 2025-06-03 15:08:28.677221 | orchestrator | changed: [testbed-manager] 2025-06-03 15:08:28.679592 | orchestrator | 2025-06-03 15:08:28.679799 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:08:28.680190 | orchestrator | 2025-06-03 15:08:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:08:28.680217 | orchestrator | 2025-06-03 15:08:28 | INFO  | Please wait and do not abort execution. 
2025-06-03 15:08:28.682176 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:08:28.682360 | orchestrator | 2025-06-03 15:08:28.683332 | orchestrator | 2025-06-03 15:08:28.684047 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:08:28.684730 | orchestrator | Tuesday 03 June 2025 15:08:28 +0000 (0:00:00.452) 0:00:27.700 ********** 2025-06-03 15:08:28.685379 | orchestrator | =============================================================================== 2025-06-03 15:08:28.685871 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.01s 2025-06-03 15:08:28.686547 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.31s 2025-06-03 15:08:28.686876 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-06-03 15:08:28.687491 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-06-03 15:08:28.688131 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-06-03 15:08:28.688481 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-06-03 15:08:28.688955 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-06-03 15:08:28.689342 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-03 15:08:28.689972 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-03 15:08:28.690243 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-03 15:08:28.690778 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-03 15:08:28.691083 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-06-03 15:08:28.691612 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-06-03 15:08:28.692053 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-06-03 15:08:28.692546 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-06-03 15:08:28.692990 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-06-03 15:08:28.693337 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.45s 2025-06-03 15:08:28.693821 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-06-03 15:08:28.694141 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-06-03 15:08:28.694641 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.14s 2025-06-03 15:08:29.013963 | orchestrator | + osism apply squid 2025-06-03 15:08:30.521724 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:08:30.521891 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:08:30.521934 | orchestrator | Registering Redlock._release_script 2025-06-03 15:08:30.571665 | orchestrator | 2025-06-03 15:08:30 | INFO  | Task 18d07ca2-ae9f-43ed-948a-bb4f95ba22dc (squid) was 
prepared for execution. 2025-06-03 15:08:30.571743 | orchestrator | 2025-06-03 15:08:30 | INFO  | It takes a moment until task 18d07ca2-ae9f-43ed-948a-bb4f95ba22dc (squid) has been started and output is visible here. 2025-06-03 15:08:34.157863 | orchestrator | 2025-06-03 15:08:34.159467 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-03 15:08:34.159564 | orchestrator | 2025-06-03 15:08:34.160139 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-03 15:08:34.162125 | orchestrator | Tuesday 03 June 2025 15:08:34 +0000 (0:00:00.152) 0:00:00.152 ********** 2025-06-03 15:08:34.227741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-03 15:08:34.228204 | orchestrator | 2025-06-03 15:08:34.228841 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-03 15:08:34.229722 | orchestrator | Tuesday 03 June 2025 15:08:34 +0000 (0:00:00.071) 0:00:00.223 ********** 2025-06-03 15:08:35.550776 | orchestrator | ok: [testbed-manager] 2025-06-03 15:08:35.551033 | orchestrator | 2025-06-03 15:08:35.551461 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-03 15:08:35.552184 | orchestrator | Tuesday 03 June 2025 15:08:35 +0000 (0:00:01.324) 0:00:01.547 ********** 2025-06-03 15:08:36.678261 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-03 15:08:36.678647 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-03 15:08:36.680130 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-03 15:08:36.680500 | orchestrator | 2025-06-03 15:08:36.681051 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-03 15:08:36.681360 | orchestrator | Tuesday 03 June 2025 15:08:36 +0000 (0:00:01.126) 0:00:02.674 ********** 2025-06-03 15:08:37.718362 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-03 15:08:37.718511 | orchestrator | 2025-06-03 15:08:37.719755 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-03 15:08:37.720972 | orchestrator | Tuesday 03 June 2025 15:08:37 +0000 (0:00:01.039) 0:00:03.713 ********** 2025-06-03 15:08:38.073486 | orchestrator | ok: [testbed-manager] 2025-06-03 15:08:38.073583 | orchestrator | 2025-06-03 15:08:38.073854 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-03 15:08:38.074165 | orchestrator | Tuesday 03 June 2025 15:08:38 +0000 (0:00:00.357) 0:00:04.070 ********** 2025-06-03 15:08:38.997962 | orchestrator | changed: [testbed-manager] 2025-06-03 15:08:38.998365 | orchestrator | 2025-06-03 15:08:38.999605 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-03 15:08:39.000125 | orchestrator | Tuesday 03 June 2025 15:08:38 +0000 (0:00:00.922) 0:00:04.993 ********** 2025-06-03 15:09:10.751200 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-06-03 15:09:10.751318 | orchestrator | ok: [testbed-manager] 2025-06-03 15:09:10.751520 | orchestrator | 2025-06-03 15:09:10.753070 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-03 15:09:10.753421 | orchestrator | Tuesday 03 June 2025 15:09:10 +0000 (0:00:31.749) 0:00:36.743 ********** 2025-06-03 15:09:22.931620 | orchestrator | changed: [testbed-manager] 2025-06-03 15:09:22.932176 | orchestrator | 2025-06-03 15:09:22.932880 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-03 15:09:22.934507 | orchestrator | Tuesday 03 June 2025 15:09:22 +0000 (0:00:12.181) 0:00:48.924 ********** 2025-06-03 15:10:23.006446 | orchestrator | Pausing for 60 seconds 2025-06-03 15:10:23.006560 | orchestrator | changed: [testbed-manager] 2025-06-03 15:10:23.006577 | orchestrator | 2025-06-03 15:10:23.007064 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-03 15:10:23.007797 | orchestrator | Tuesday 03 June 2025 15:10:22 +0000 (0:01:00.072) 0:01:48.997 ********** 2025-06-03 15:10:23.073475 | orchestrator | ok: [testbed-manager] 2025-06-03 15:10:23.074318 | orchestrator | 2025-06-03 15:10:23.075201 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-03 15:10:23.075935 | orchestrator | Tuesday 03 June 2025 15:10:23 +0000 (0:00:00.070) 0:01:49.068 ********** 2025-06-03 15:10:23.714829 | orchestrator | changed: [testbed-manager] 2025-06-03 15:10:23.715542 | orchestrator | 2025-06-03 15:10:23.716129 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:10:23.716585 | orchestrator | 2025-06-03 15:10:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:10:23.716692 | orchestrator | 2025-06-03 15:10:23 | INFO  | Please wait and do not abort execution. 
2025-06-03 15:10:23.717622 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:10:23.718166 | orchestrator | 2025-06-03 15:10:23.718655 | orchestrator | 2025-06-03 15:10:23.719168 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:10:23.719798 | orchestrator | Tuesday 03 June 2025 15:10:23 +0000 (0:00:00.643) 0:01:49.712 ********** 2025-06-03 15:10:23.720452 | orchestrator | =============================================================================== 2025-06-03 15:10:23.721047 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-06-03 15:10:23.721940 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.75s 2025-06-03 15:10:23.722610 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.18s 2025-06-03 15:10:23.723505 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.32s 2025-06-03 15:10:23.724193 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.13s 2025-06-03 15:10:23.724584 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.04s 2025-06-03 15:10:23.725114 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2025-06-03 15:10:23.725796 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2025-06-03 15:10:23.726181 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s 2025-06-03 15:10:23.726837 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2025-06-03 15:10:23.727273 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-06-03 15:10:24.215956 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-03 15:10:24.216057 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-06-03 15:10:24.219233 | orchestrator | ++ semver 9.1.0 9.0.0 2025-06-03 15:10:24.292310 | orchestrator | + [[ 1 -lt 0 ]] 2025-06-03 15:10:24.292700 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-03 15:10:25.940303 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:10:25.940423 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:10:25.940438 | orchestrator | Registering Redlock._release_script 2025-06-03 15:10:26.000032 | orchestrator | 2025-06-03 15:10:25 | INFO  | Task 6f27806e-f3b9-4a06-a661-0bc48cb54956 (operator) was prepared for execution. 2025-06-03 15:10:26.000114 | orchestrator | 2025-06-03 15:10:25 | INFO  | It takes a moment until task 6f27806e-f3b9-4a06-a661-0bc48cb54956 (operator) has been started and output is visible here. 
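The semver calls in the trace (semver 9.1.0 7.0.0 printing 1, followed by checks such as [[ 1 -ge 0 ]] and [[ 1 -lt 0 ]]) behave like a three-way version comparator. A minimal stand-in with the same observable behaviour, built on sort -V, might look as follows; this is an illustration, not the script actually installed on the manager:

semver() {
    # Print -1, 0 or 1 depending on how version $1 compares to version $2.
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1
    else
        echo 1
    fi
}
# semver 9.1.0 7.0.0 -> 1, so "[[ 1 -ge 0 ]]" passes; semver 9.1.0 9.0.0 -> 1, so "[[ 1 -lt 0 ]]" fails.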
2025-06-03 15:10:29.905779 | orchestrator | 2025-06-03 15:10:29.906363 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-06-03 15:10:29.907232 | orchestrator | 2025-06-03 15:10:29.908229 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:10:29.909844 | orchestrator | Tuesday 03 June 2025 15:10:29 +0000 (0:00:00.147) 0:00:00.147 ********** 2025-06-03 15:10:33.092657 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:10:33.092781 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:10:33.093200 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:10:33.093662 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:10:33.094277 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:10:33.094866 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:10:33.096061 | orchestrator | 2025-06-03 15:10:33.096156 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-06-03 15:10:33.096510 | orchestrator | Tuesday 03 June 2025 15:10:33 +0000 (0:00:03.189) 0:00:03.336 ********** 2025-06-03 15:10:33.878455 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:10:33.878715 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:10:33.880141 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:10:33.881159 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:10:33.885146 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:10:33.885177 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:10:33.885188 | orchestrator | 2025-06-03 15:10:33.885201 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-06-03 15:10:33.885952 | orchestrator | 2025-06-03 15:10:33.886942 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-03 15:10:33.887235 | orchestrator | Tuesday 03 June 2025 15:10:33 +0000 (0:00:00.785) 0:00:04.122 ********** 2025-06-03 15:10:33.942203 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:10:33.965272 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:10:33.989043 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:10:34.041473 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:10:34.042161 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:10:34.043207 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:10:34.043565 | orchestrator | 2025-06-03 15:10:34.045239 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-03 15:10:34.045623 | orchestrator | Tuesday 03 June 2025 15:10:34 +0000 (0:00:00.161) 0:00:04.283 ********** 2025-06-03 15:10:34.110159 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:10:34.158792 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:10:34.212618 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:10:34.213634 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:10:34.214390 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:10:34.215221 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:10:34.216135 | orchestrator | 2025-06-03 15:10:34.216855 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-03 15:10:34.217609 | orchestrator | Tuesday 03 June 2025 15:10:34 +0000 (0:00:00.173) 0:00:04.457 ********** 2025-06-03 15:10:34.850334 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:10:34.850874 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:10:34.852205 | orchestrator | changed: [testbed-node-5] 2025-06-03 
15:10:34.852724 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:10:34.854141 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:10:34.855278 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:10:34.855932 | orchestrator | 2025-06-03 15:10:34.856838 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-03 15:10:34.857647 | orchestrator | Tuesday 03 June 2025 15:10:34 +0000 (0:00:00.636) 0:00:05.094 ********** 2025-06-03 15:10:35.650161 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:10:35.650394 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:10:35.653949 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:10:35.654453 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:10:35.654827 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:10:35.657875 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:10:35.662467 | orchestrator | 2025-06-03 15:10:35.662503 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-03 15:10:35.664246 | orchestrator | Tuesday 03 June 2025 15:10:35 +0000 (0:00:00.797) 0:00:05.891 ********** 2025-06-03 15:10:36.791037 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-06-03 15:10:36.792500 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-06-03 15:10:36.797160 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-06-03 15:10:36.797202 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-06-03 15:10:36.798293 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-06-03 15:10:36.798892 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-06-03 15:10:36.799134 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-06-03 15:10:36.799933 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-06-03 15:10:36.800482 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-06-03 15:10:36.802098 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-06-03 15:10:36.802134 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-06-03 15:10:36.802194 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-06-03 15:10:36.802833 | orchestrator | 2025-06-03 15:10:36.803788 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-03 15:10:36.804626 | orchestrator | Tuesday 03 June 2025 15:10:36 +0000 (0:00:01.140) 0:00:07.031 ********** 2025-06-03 15:10:37.935273 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:10:37.936338 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:10:37.937423 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:10:37.938273 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:10:37.938883 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:10:37.939572 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:10:37.940385 | orchestrator | 2025-06-03 15:10:37.941279 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-03 15:10:37.941976 | orchestrator | Tuesday 03 June 2025 15:10:37 +0000 (0:00:01.146) 0:00:08.178 ********** 2025-06-03 15:10:39.061769 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-06-03 15:10:39.061861 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-06-03 15:10:39.061868 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-06-03 15:10:39.117320 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-06-03 15:10:39.117573 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-06-03 15:10:39.118272 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-06-03 15:10:39.119006 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-06-03 15:10:39.119827 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-06-03 15:10:39.120753 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-06-03 15:10:39.121560 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-06-03 15:10:39.122212 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-06-03 15:10:39.122730 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-06-03 15:10:39.123216 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-06-03 15:10:39.123905 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-06-03 15:10:39.124410 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-06-03 15:10:39.124830 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-06-03 15:10:39.125199 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-06-03 15:10:39.125779 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-06-03 15:10:39.126095 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-06-03 15:10:39.126567 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-06-03 15:10:39.127000 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-06-03 15:10:39.127322 | orchestrator | 2025-06-03 15:10:39.127672 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-03 15:10:39.127955 | orchestrator | Tuesday 03 June 2025 15:10:39 +0000 (0:00:01.182) 0:00:09.361 ********** 2025-06-03 15:10:39.683667 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:10:39.683771 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:10:39.684590 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:10:39.685184 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:10:39.686605 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:10:39.686830 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:10:39.687502 | orchestrator | 2025-06-03 15:10:39.687830 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-03 15:10:39.688514 | orchestrator | Tuesday 03 June 2025 15:10:39 +0000 (0:00:00.566) 0:00:09.927 ********** 2025-06-03 15:10:39.748804 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:10:39.770508 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:10:39.796478 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:10:39.837975 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:10:39.838570 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:10:39.838991 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:10:39.839652 | orchestrator | 2025-06-03 15:10:39.840203 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-06-03 15:10:39.840976 | orchestrator | Tuesday 03 June 2025 15:10:39 +0000 (0:00:00.156) 0:00:10.083 ********** 2025-06-03 15:10:40.536463 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-03 15:10:40.541061 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:10:40.544215 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-03 15:10:40.544411 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:10:40.544429 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-03 15:10:40.545224 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:10:40.545848 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-03 15:10:40.546301 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:10:40.547161 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-03 15:10:40.547553 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:10:40.548291 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-03 15:10:40.548645 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:10:40.549126 | orchestrator | 2025-06-03 15:10:40.549669 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-03 15:10:40.549978 | orchestrator | Tuesday 03 June 2025 15:10:40 +0000 (0:00:00.696) 0:00:10.779 ********** 2025-06-03 15:10:40.604287 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:10:40.631200 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:10:40.650600 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:10:40.695682 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:10:40.697033 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:10:40.698094 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:10:40.698779 | orchestrator | 2025-06-03 15:10:40.699745 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-03 15:10:40.700210 | orchestrator | Tuesday 03 June 2025 15:10:40 +0000 (0:00:00.160) 0:00:10.940 ********** 2025-06-03 15:10:40.758559 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:10:40.775315 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:10:40.792275 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:10:40.838095 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:10:40.839147 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:10:40.842967 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:10:40.843706 | orchestrator | 2025-06-03 15:10:40.844742 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-03 15:10:40.845455 | orchestrator | Tuesday 03 June 2025 15:10:40 +0000 (0:00:00.142) 0:00:11.082 ********** 2025-06-03 15:10:40.891716 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:10:40.910438 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:10:40.949315 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:10:40.973649 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:10:40.974761 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:10:40.975596 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:10:40.976665 | orchestrator | 2025-06-03 15:10:40.978136 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-03 15:10:40.979072 | orchestrator | Tuesday 03 June 2025 15:10:40 +0000 (0:00:00.135) 0:00:11.218 ********** 2025-06-03 15:10:41.590754 | orchestrator | changed: [testbed-node-0] 2025-06-03 
15:10:41.593319 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:10:41.594628 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:10:41.595730 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:10:41.596598 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:10:41.598252 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:10:41.599421 | orchestrator | 2025-06-03 15:10:41.599867 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-03 15:10:41.600509 | orchestrator | Tuesday 03 June 2025 15:10:41 +0000 (0:00:00.615) 0:00:11.834 ********** 2025-06-03 15:10:41.663936 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:10:41.683051 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:10:41.769396 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:10:41.770585 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:10:41.772004 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:10:41.773002 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:10:41.774086 | orchestrator | 2025-06-03 15:10:41.775179 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:10:41.775758 | orchestrator | 2025-06-03 15:10:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:10:41.776275 | orchestrator | 2025-06-03 15:10:41 | INFO  | Please wait and do not abort execution. 2025-06-03 15:10:41.777854 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:10:41.778470 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:10:41.778851 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:10:41.779666 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:10:41.780693 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:10:41.781103 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:10:41.782160 | orchestrator | 2025-06-03 15:10:41.782487 | orchestrator | 2025-06-03 15:10:41.783157 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:10:41.783628 | orchestrator | Tuesday 03 June 2025 15:10:41 +0000 (0:00:00.181) 0:00:12.015 ********** 2025-06-03 15:10:41.784295 | orchestrator | =============================================================================== 2025-06-03 15:10:41.785420 | orchestrator | Gathering Facts --------------------------------------------------------- 3.19s 2025-06-03 15:10:41.785778 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.18s 2025-06-03 15:10:41.786920 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.15s 2025-06-03 15:10:41.786990 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.14s 2025-06-03 15:10:41.787806 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2025-06-03 15:10:41.788574 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s 2025-06-03 15:10:41.788829 | orchestrator | 
osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2025-06-03 15:10:41.789787 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s
2025-06-03 15:10:41.790367 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.62s
2025-06-03 15:10:41.790963 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s
2025-06-03 15:10:41.791636 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.18s
2025-06-03 15:10:41.792579 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-06-03 15:10:41.793487 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-06-03 15:10:41.793597 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2025-06-03 15:10:41.794694 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2025-06-03 15:10:41.795712 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2025-06-03 15:10:41.796989 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-06-03 15:10:42.207298 | orchestrator | + osism apply --environment custom facts
2025-06-03 15:10:43.822574 | orchestrator | 2025-06-03 15:10:43 | INFO  | Trying to run play facts in environment custom
2025-06-03 15:10:43.827087 | orchestrator | Registering Redlock._acquired_script
2025-06-03 15:10:43.827123 | orchestrator | Registering Redlock._extend_script
2025-06-03 15:10:43.827136 | orchestrator | Registering Redlock._release_script
2025-06-03 15:10:43.880851 | orchestrator | 2025-06-03 15:10:43 | INFO  | Task 089bd83b-3f7c-47d3-b548-6ba9d03bf052 (facts) was prepared for execution.
2025-06-03 15:10:43.880925 | orchestrator | 2025-06-03 15:10:43 | INFO  | It takes a moment until task 089bd83b-3f7c-47d3-b548-6ba9d03bf052 (facts) has been started and output is visible here.
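The facts play started here distributes custom Ansible facts: a network-devices fact to every host and, to the storage nodes, the testbed_ceph_* device lists used later for Ceph OSD setup, as the output that follows shows. Custom facts of this kind live in /etc/ansible/facts.d and become visible under ansible_local after the next fact-gathering run, which is why the play ends by re-gathering facts for all hosts. A minimal sketch of the pattern, with an assumed fact name and source path (not the actual testbed play):

# Minimal custom-facts pattern -- illustrative only, not the actual testbed play.
# The fact name "testbed_network_devices" and the src path are assumptions.
- name: Copy a custom fact to all hosts
  hosts: all
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Copy fact file
      ansible.builtin.copy:
        src: files/testbed_network_devices.fact
        dest: /etc/ansible/facts.d/testbed_network_devices.fact
        mode: "0755"

- name: Gather facts for all hosts
  hosts: all
  gather_facts: true
  tasks:
    - name: Show the new local fact
      ansible.builtin.debug:
        var: ansible_local.testbed_network_devices

A .fact file must be valid JSON/INI or an executable that prints JSON; its content then appears under ansible_local.<name> on that host.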
2025-06-03 15:10:47.747967 | orchestrator | 2025-06-03 15:10:47.748166 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-06-03 15:10:47.752214 | orchestrator | 2025-06-03 15:10:47.753003 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-03 15:10:47.753707 | orchestrator | Tuesday 03 June 2025 15:10:47 +0000 (0:00:00.083) 0:00:00.083 ********** 2025-06-03 15:10:49.083781 | orchestrator | ok: [testbed-manager] 2025-06-03 15:10:49.085049 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:10:49.086182 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:10:49.087026 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:10:49.088454 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:10:49.089090 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:10:49.089999 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:10:49.090476 | orchestrator | 2025-06-03 15:10:49.091369 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-06-03 15:10:49.091962 | orchestrator | Tuesday 03 June 2025 15:10:49 +0000 (0:00:01.335) 0:00:01.418 ********** 2025-06-03 15:10:50.209758 | orchestrator | ok: [testbed-manager] 2025-06-03 15:10:50.212026 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:10:50.213116 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:10:50.214012 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:10:50.215151 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:10:50.215863 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:10:50.216462 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:10:50.217458 | orchestrator | 2025-06-03 15:10:50.218111 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-06-03 15:10:50.219842 | orchestrator | 2025-06-03 15:10:50.219875 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-03 15:10:50.220128 | orchestrator | Tuesday 03 June 2025 15:10:50 +0000 (0:00:01.127) 0:00:02.545 ********** 2025-06-03 15:10:50.341953 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:10:50.342614 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:10:50.343318 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:10:50.345992 | orchestrator | 2025-06-03 15:10:50.346086 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-03 15:10:50.346100 | orchestrator | Tuesday 03 June 2025 15:10:50 +0000 (0:00:00.133) 0:00:02.679 ********** 2025-06-03 15:10:50.540513 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:10:50.540604 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:10:50.540618 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:10:50.541296 | orchestrator | 2025-06-03 15:10:50.542191 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-03 15:10:50.542822 | orchestrator | Tuesday 03 June 2025 15:10:50 +0000 (0:00:00.199) 0:00:02.878 ********** 2025-06-03 15:10:50.756771 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:10:50.757184 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:10:50.757888 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:10:50.758216 | orchestrator | 2025-06-03 15:10:50.758747 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-03 15:10:50.759431 | orchestrator | Tuesday 03 
June 2025 15:10:50 +0000 (0:00:00.216) 0:00:03.095 ********** 2025-06-03 15:10:50.904239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:10:50.905324 | orchestrator | 2025-06-03 15:10:50.906692 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-03 15:10:50.908050 | orchestrator | Tuesday 03 June 2025 15:10:50 +0000 (0:00:00.143) 0:00:03.238 ********** 2025-06-03 15:10:51.316091 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:10:51.316183 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:10:51.317072 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:10:51.317984 | orchestrator | 2025-06-03 15:10:51.318867 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-03 15:10:51.319570 | orchestrator | Tuesday 03 June 2025 15:10:51 +0000 (0:00:00.414) 0:00:03.653 ********** 2025-06-03 15:10:51.425717 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:10:51.426553 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:10:51.429628 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:10:51.429653 | orchestrator | 2025-06-03 15:10:51.429664 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-03 15:10:51.429676 | orchestrator | Tuesday 03 June 2025 15:10:51 +0000 (0:00:00.109) 0:00:03.763 ********** 2025-06-03 15:10:52.350200 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:10:52.350480 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:10:52.351504 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:10:52.352525 | orchestrator | 2025-06-03 15:10:52.353415 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-03 15:10:52.354066 | orchestrator | Tuesday 03 June 2025 15:10:52 +0000 (0:00:00.922) 0:00:04.686 ********** 2025-06-03 15:10:52.728435 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:10:52.729087 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:10:52.729433 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:10:52.731254 | orchestrator | 2025-06-03 15:10:52.732474 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-03 15:10:52.733502 | orchestrator | Tuesday 03 June 2025 15:10:52 +0000 (0:00:00.378) 0:00:05.064 ********** 2025-06-03 15:10:53.688648 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:10:53.689816 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:10:53.690401 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:10:53.691591 | orchestrator | 2025-06-03 15:10:53.692172 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-03 15:10:53.692948 | orchestrator | Tuesday 03 June 2025 15:10:53 +0000 (0:00:00.960) 0:00:06.024 ********** 2025-06-03 15:11:05.976735 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:11:05.976800 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:11:05.976807 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:11:05.977169 | orchestrator | 2025-06-03 15:11:05.977858 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-06-03 15:11:05.978041 | orchestrator | Tuesday 03 June 2025 15:11:05 +0000 (0:00:12.286) 0:00:18.311 ********** 2025-06-03 15:11:06.106163 | orchestrator | skipping: 
[testbed-node-3] 2025-06-03 15:11:06.106221 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:06.106578 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:06.107139 | orchestrator | 2025-06-03 15:11:06.108244 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-06-03 15:11:06.108718 | orchestrator | Tuesday 03 June 2025 15:11:06 +0000 (0:00:00.132) 0:00:18.444 ********** 2025-06-03 15:11:11.985609 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:11:11.985958 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:11:11.986898 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:11:11.987709 | orchestrator | 2025-06-03 15:11:11.989165 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-03 15:11:11.990111 | orchestrator | Tuesday 03 June 2025 15:11:11 +0000 (0:00:05.877) 0:00:24.321 ********** 2025-06-03 15:11:12.409554 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:12.410179 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:12.412026 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:12.412872 | orchestrator | 2025-06-03 15:11:12.414747 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-03 15:11:12.415219 | orchestrator | Tuesday 03 June 2025 15:11:12 +0000 (0:00:00.424) 0:00:24.745 ********** 2025-06-03 15:11:15.857937 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-06-03 15:11:15.858177 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-06-03 15:11:15.859229 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-06-03 15:11:15.862439 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-06-03 15:11:15.864667 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-06-03 15:11:15.865579 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-06-03 15:11:15.866380 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-06-03 15:11:15.867394 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-06-03 15:11:15.868071 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-06-03 15:11:15.868748 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-06-03 15:11:15.869579 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-06-03 15:11:15.870949 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-06-03 15:11:15.871813 | orchestrator | 2025-06-03 15:11:15.872924 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-03 15:11:15.873704 | orchestrator | Tuesday 03 June 2025 15:11:15 +0000 (0:00:03.447) 0:00:28.193 ********** 2025-06-03 15:11:17.979803 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:17.982197 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:17.982239 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:17.982839 | orchestrator | 2025-06-03 15:11:17.984360 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-03 15:11:17.985247 | orchestrator | 2025-06-03 15:11:17.986100 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-03 15:11:17.987055 | orchestrator | Tuesday 
03 June 2025 15:11:17 +0000 (0:00:02.121) 0:00:30.314 ********** 2025-06-03 15:11:21.823453 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:21.824651 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:21.824888 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:21.826322 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:21.826951 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:21.828508 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:21.830012 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:21.830463 | orchestrator | 2025-06-03 15:11:21.831115 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:11:21.832169 | orchestrator | 2025-06-03 15:11:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:11:21.832191 | orchestrator | 2025-06-03 15:11:21 | INFO  | Please wait and do not abort execution. 2025-06-03 15:11:21.832371 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:11:21.833087 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:11:21.833829 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:11:21.834150 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:11:21.834590 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:11:21.834993 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:11:21.835433 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:11:21.835906 | orchestrator | 2025-06-03 15:11:21.836256 | orchestrator | 2025-06-03 15:11:21.836680 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:11:21.837039 | orchestrator | Tuesday 03 June 2025 15:11:21 +0000 (0:00:03.845) 0:00:34.160 ********** 2025-06-03 15:11:21.837578 | orchestrator | =============================================================================== 2025-06-03 15:11:21.838164 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.29s 2025-06-03 15:11:21.838298 | orchestrator | Install required packages (Debian) -------------------------------------- 5.88s 2025-06-03 15:11:21.838919 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.85s 2025-06-03 15:11:21.839116 | orchestrator | Copy fact files --------------------------------------------------------- 3.45s 2025-06-03 15:11:21.839434 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 2.12s 2025-06-03 15:11:21.839870 | orchestrator | Create custom facts directory ------------------------------------------- 1.34s 2025-06-03 15:11:21.841050 | orchestrator | Copy fact file ---------------------------------------------------------- 1.13s 2025-06-03 15:11:21.841287 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.96s 2025-06-03 15:11:21.841714 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.92s 2025-06-03 15:11:21.842090 | orchestrator | Create custom facts directory 
------------------------------------------- 0.42s 2025-06-03 15:11:21.842589 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s 2025-06-03 15:11:21.843024 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.38s 2025-06-03 15:11:21.843498 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s 2025-06-03 15:11:21.843837 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2025-06-03 15:11:21.844348 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2025-06-03 15:11:21.845563 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s 2025-06-03 15:11:21.846011 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s 2025-06-03 15:11:21.846623 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-06-03 15:11:22.309021 | orchestrator | + osism apply bootstrap 2025-06-03 15:11:23.877564 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:11:23.877657 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:11:23.877673 | orchestrator | Registering Redlock._release_script 2025-06-03 15:11:23.944476 | orchestrator | 2025-06-03 15:11:23 | INFO  | Task b63b9aaa-b1f5-4623-9250-6de864248e62 (bootstrap) was prepared for execution. 2025-06-03 15:11:23.944561 | orchestrator | 2025-06-03 15:11:23 | INFO  | It takes a moment until task b63b9aaa-b1f5-4623-9250-6de864248e62 (bootstrap) has been started and output is visible here. 2025-06-03 15:11:27.813162 | orchestrator | 2025-06-03 15:11:27.815267 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-06-03 15:11:27.815307 | orchestrator | 2025-06-03 15:11:27.817252 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-06-03 15:11:27.817487 | orchestrator | Tuesday 03 June 2025 15:11:27 +0000 (0:00:00.158) 0:00:00.158 ********** 2025-06-03 15:11:27.884249 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:27.908214 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:27.936538 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:27.961148 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:28.031715 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:28.032081 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:28.032846 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:28.036638 | orchestrator | 2025-06-03 15:11:28.037319 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-03 15:11:28.038129 | orchestrator | 2025-06-03 15:11:28.038931 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-03 15:11:28.039562 | orchestrator | Tuesday 03 June 2025 15:11:28 +0000 (0:00:00.223) 0:00:00.381 ********** 2025-06-03 15:11:31.688176 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:31.688408 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:31.690915 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:31.692067 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:31.693457 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:31.694500 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:31.694935 | orchestrator | ok: [testbed-node-4] 2025-06-03 
15:11:31.696205 | orchestrator | 2025-06-03 15:11:31.697046 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-06-03 15:11:31.698174 | orchestrator | 2025-06-03 15:11:31.698508 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-03 15:11:31.699263 | orchestrator | Tuesday 03 June 2025 15:11:31 +0000 (0:00:03.655) 0:00:04.037 ********** 2025-06-03 15:11:31.775474 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-03 15:11:31.775626 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-03 15:11:31.796680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-06-03 15:11:31.796719 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-03 15:11:31.843737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:11:31.843868 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-03 15:11:31.844575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:11:31.845243 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-06-03 15:11:31.845570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:11:32.105702 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-03 15:11:32.106086 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-06-03 15:11:32.107277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-03 15:11:32.107564 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-06-03 15:11:32.107960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-06-03 15:11:32.108490 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-03 15:11:32.109548 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-06-03 15:11:32.109577 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-06-03 15:11:32.109878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-03 15:11:32.110760 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-06-03 15:11:32.110810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-03 15:11:32.111105 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-06-03 15:11:32.111509 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-03 15:11:32.111965 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-06-03 15:11:32.112457 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:11:32.112918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-03 15:11:32.113468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-06-03 15:11:32.113998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-03 15:11:32.114692 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:11:32.114796 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-06-03 15:11:32.115080 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-03 15:11:32.115537 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-06-03 15:11:32.116026 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-03 15:11:32.116299 | orchestrator | 
skipping: [testbed-node-1] => (item=testbed-node-5)  2025-06-03 15:11:32.116728 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-03 15:11:32.117368 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-06-03 15:11:32.117902 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-03 15:11:32.118645 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-06-03 15:11:32.118992 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-03 15:11:32.119738 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-03 15:11:32.120358 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-03 15:11:32.121066 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-03 15:11:32.121917 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-06-03 15:11:32.123475 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-03 15:11:32.124227 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-03 15:11:32.124555 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:11:32.125094 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-03 15:11:32.125722 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-03 15:11:32.126347 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-03 15:11:32.126861 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:11:32.127185 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-03 15:11:32.127656 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:32.127995 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-03 15:11:32.128408 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:32.128812 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-03 15:11:32.129172 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-03 15:11:32.129496 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:11:32.129886 | orchestrator | 2025-06-03 15:11:32.130480 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-03 15:11:32.130725 | orchestrator | 2025-06-03 15:11:32.131537 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-03 15:11:32.131564 | orchestrator | Tuesday 03 June 2025 15:11:32 +0000 (0:00:00.416) 0:00:04.454 ********** 2025-06-03 15:11:33.267173 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:33.267713 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:33.268925 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:33.270708 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:33.271064 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:33.272016 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:33.272821 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:33.273531 | orchestrator | 2025-06-03 15:11:33.274802 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-03 15:11:33.275429 | orchestrator | Tuesday 03 June 2025 15:11:33 +0000 (0:00:01.162) 0:00:05.616 ********** 2025-06-03 15:11:34.371151 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:34.372103 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:34.372787 | orchestrator | ok: [testbed-node-1] 
2025-06-03 15:11:34.373696 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:34.374523 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:34.375022 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:34.375784 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:34.376381 | orchestrator | 2025-06-03 15:11:34.376746 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-03 15:11:34.377301 | orchestrator | Tuesday 03 June 2025 15:11:34 +0000 (0:00:01.102) 0:00:06.719 ********** 2025-06-03 15:11:34.580774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:11:34.581288 | orchestrator | 2025-06-03 15:11:34.581699 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-03 15:11:34.582581 | orchestrator | Tuesday 03 June 2025 15:11:34 +0000 (0:00:00.210) 0:00:06.930 ********** 2025-06-03 15:11:36.486862 | orchestrator | changed: [testbed-manager] 2025-06-03 15:11:36.487067 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:11:36.487089 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:11:36.487537 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:11:36.488258 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:11:36.489090 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:11:36.489774 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:11:36.492952 | orchestrator | 2025-06-03 15:11:36.492977 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-03 15:11:36.492990 | orchestrator | Tuesday 03 June 2025 15:11:36 +0000 (0:00:01.902) 0:00:08.833 ********** 2025-06-03 15:11:36.597445 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:11:36.780506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:11:36.781686 | orchestrator | 2025-06-03 15:11:36.782126 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-03 15:11:36.782964 | orchestrator | Tuesday 03 June 2025 15:11:36 +0000 (0:00:00.296) 0:00:09.129 ********** 2025-06-03 15:11:37.833167 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:11:37.834616 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:11:37.835883 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:11:37.836623 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:11:37.837680 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:11:37.838478 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:11:37.839699 | orchestrator | 2025-06-03 15:11:37.840759 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-03 15:11:37.841394 | orchestrator | Tuesday 03 June 2025 15:11:37 +0000 (0:00:01.051) 0:00:10.181 ********** 2025-06-03 15:11:37.903848 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:11:38.383699 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:11:38.383801 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:11:38.384014 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:11:38.384687 | orchestrator | changed: [testbed-node-1] 
2025-06-03 15:11:38.385036 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:11:38.385472 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:11:38.385930 | orchestrator | 2025-06-03 15:11:38.386746 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-03 15:11:38.387425 | orchestrator | Tuesday 03 June 2025 15:11:38 +0000 (0:00:00.551) 0:00:10.732 ********** 2025-06-03 15:11:38.496150 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:11:38.535741 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:38.553252 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:38.806164 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:11:38.807076 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:11:38.807915 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:11:38.808541 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:38.809272 | orchestrator | 2025-06-03 15:11:38.809801 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-03 15:11:38.810569 | orchestrator | Tuesday 03 June 2025 15:11:38 +0000 (0:00:00.423) 0:00:11.156 ********** 2025-06-03 15:11:38.902539 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:11:38.928228 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:11:38.964060 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:38.989005 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:39.059498 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:11:39.060860 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:11:39.061856 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:11:39.063107 | orchestrator | 2025-06-03 15:11:39.064784 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-03 15:11:39.065307 | orchestrator | Tuesday 03 June 2025 15:11:39 +0000 (0:00:00.252) 0:00:11.408 ********** 2025-06-03 15:11:39.367070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:11:39.368065 | orchestrator | 2025-06-03 15:11:39.369359 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-03 15:11:39.370002 | orchestrator | Tuesday 03 June 2025 15:11:39 +0000 (0:00:00.307) 0:00:11.715 ********** 2025-06-03 15:11:39.685578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:11:39.685760 | orchestrator | 2025-06-03 15:11:39.685895 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-03 15:11:39.686374 | orchestrator | Tuesday 03 June 2025 15:11:39 +0000 (0:00:00.316) 0:00:12.032 ********** 2025-06-03 15:11:40.984592 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:40.985529 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:40.985940 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:40.987273 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:40.988833 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:40.989680 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:40.990526 
| orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:40.991391 | orchestrator | 2025-06-03 15:11:40.992134 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-03 15:11:40.992850 | orchestrator | Tuesday 03 June 2025 15:11:40 +0000 (0:00:01.290) 0:00:13.323 ********** 2025-06-03 15:11:41.053580 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:11:41.077991 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:11:41.106977 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:41.129169 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:41.186127 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:11:41.186859 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:11:41.187061 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:11:41.187979 | orchestrator | 2025-06-03 15:11:41.191243 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-03 15:11:41.191613 | orchestrator | Tuesday 03 June 2025 15:11:41 +0000 (0:00:00.212) 0:00:13.535 ********** 2025-06-03 15:11:41.716564 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:41.717128 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:41.719059 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:41.719702 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:41.720688 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:41.721768 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:41.723490 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:41.724020 | orchestrator | 2025-06-03 15:11:41.725111 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-03 15:11:41.726205 | orchestrator | Tuesday 03 June 2025 15:11:41 +0000 (0:00:00.528) 0:00:14.063 ********** 2025-06-03 15:11:41.810125 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:11:41.846164 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:11:41.871736 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:41.897971 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:41.966482 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:11:41.966570 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:11:41.967226 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:11:41.968151 | orchestrator | 2025-06-03 15:11:41.968917 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-03 15:11:41.970917 | orchestrator | Tuesday 03 June 2025 15:11:41 +0000 (0:00:00.251) 0:00:14.315 ********** 2025-06-03 15:11:42.507155 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:42.507439 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:11:42.508810 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:11:42.509732 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:11:42.510439 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:11:42.511209 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:11:42.512307 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:11:42.513224 | orchestrator | 2025-06-03 15:11:42.514261 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-03 15:11:42.514784 | orchestrator | Tuesday 03 June 2025 15:11:42 +0000 (0:00:00.540) 0:00:14.855 ********** 2025-06-03 15:11:43.609457 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:43.610494 | orchestrator | changed: 
[testbed-node-4] 2025-06-03 15:11:43.611564 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:11:43.613168 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:11:43.615216 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:11:43.615799 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:11:43.616268 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:11:43.616777 | orchestrator | 2025-06-03 15:11:43.617281 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-03 15:11:43.617809 | orchestrator | Tuesday 03 June 2025 15:11:43 +0000 (0:00:01.100) 0:00:15.956 ********** 2025-06-03 15:11:45.675167 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:45.676014 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:45.677182 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:45.679427 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:45.679697 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:45.680798 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:45.681783 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:45.682619 | orchestrator | 2025-06-03 15:11:45.684273 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-03 15:11:45.684978 | orchestrator | Tuesday 03 June 2025 15:11:45 +0000 (0:00:02.066) 0:00:18.022 ********** 2025-06-03 15:11:46.036846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:11:46.037601 | orchestrator | 2025-06-03 15:11:46.039125 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-03 15:11:46.042120 | orchestrator | Tuesday 03 June 2025 15:11:46 +0000 (0:00:00.363) 0:00:18.386 ********** 2025-06-03 15:11:46.112350 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:11:47.384168 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:11:47.387418 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:11:47.387472 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:11:47.387485 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:11:47.387496 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:11:47.388044 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:11:47.388770 | orchestrator | 2025-06-03 15:11:47.389785 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-03 15:11:47.390614 | orchestrator | Tuesday 03 June 2025 15:11:47 +0000 (0:00:01.344) 0:00:19.730 ********** 2025-06-03 15:11:47.458306 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:47.485880 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:47.509398 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:47.533987 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:47.597066 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:47.597298 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:47.598001 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:47.598242 | orchestrator | 2025-06-03 15:11:47.598809 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-03 15:11:47.599212 | orchestrator | Tuesday 03 June 2025 15:11:47 +0000 (0:00:00.215) 0:00:19.946 ********** 2025-06-03 15:11:47.672579 | orchestrator | ok: 
[testbed-manager] 2025-06-03 15:11:47.721411 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:47.745294 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:47.806685 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:47.807694 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:47.809789 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:47.809950 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:47.811668 | orchestrator | 2025-06-03 15:11:47.813136 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-03 15:11:47.814250 | orchestrator | Tuesday 03 June 2025 15:11:47 +0000 (0:00:00.209) 0:00:20.156 ********** 2025-06-03 15:11:47.879291 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:47.905681 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:47.928932 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:47.952431 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:48.007892 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:48.009076 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:48.010853 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:48.011820 | orchestrator | 2025-06-03 15:11:48.012734 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-03 15:11:48.013589 | orchestrator | Tuesday 03 June 2025 15:11:48 +0000 (0:00:00.199) 0:00:20.356 ********** 2025-06-03 15:11:48.262809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:11:48.263782 | orchestrator | 2025-06-03 15:11:48.264880 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-03 15:11:48.266117 | orchestrator | Tuesday 03 June 2025 15:11:48 +0000 (0:00:00.255) 0:00:20.611 ********** 2025-06-03 15:11:48.781118 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:48.782584 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:48.782825 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:48.783672 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:48.784927 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:48.785963 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:48.786639 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:48.787997 | orchestrator | 2025-06-03 15:11:48.789502 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-03 15:11:48.789570 | orchestrator | Tuesday 03 June 2025 15:11:48 +0000 (0:00:00.517) 0:00:21.129 ********** 2025-06-03 15:11:48.860830 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:11:48.884205 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:11:48.907032 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:48.935237 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:48.992359 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:11:48.993105 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:11:48.993961 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:11:48.994671 | orchestrator | 2025-06-03 15:11:48.995393 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-03 15:11:48.995737 | orchestrator | Tuesday 03 June 2025 15:11:48 +0000 (0:00:00.212) 0:00:21.342 ********** 2025-06-03 15:11:50.092549 | 
orchestrator | ok: [testbed-manager] 2025-06-03 15:11:50.093072 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:50.094235 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:50.095022 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:50.096041 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:11:50.096552 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:11:50.097195 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:11:50.097916 | orchestrator | 2025-06-03 15:11:50.098608 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-03 15:11:50.099114 | orchestrator | Tuesday 03 June 2025 15:11:50 +0000 (0:00:01.098) 0:00:22.440 ********** 2025-06-03 15:11:50.647383 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:50.647554 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:50.650249 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:50.650380 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:50.650743 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:11:50.651751 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:11:50.651955 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:11:50.652840 | orchestrator | 2025-06-03 15:11:50.653473 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-03 15:11:50.654280 | orchestrator | Tuesday 03 June 2025 15:11:50 +0000 (0:00:00.555) 0:00:22.996 ********** 2025-06-03 15:11:51.752860 | orchestrator | ok: [testbed-manager] 2025-06-03 15:11:51.753603 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:11:51.754889 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:11:51.755733 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:11:51.757423 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:11:51.757780 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:11:51.759379 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:11:51.761189 | orchestrator | 2025-06-03 15:11:51.762059 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-03 15:11:51.762645 | orchestrator | Tuesday 03 June 2025 15:11:51 +0000 (0:00:01.104) 0:00:24.100 ********** 2025-06-03 15:12:05.506848 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:05.506986 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:05.507189 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:05.511386 | orchestrator | changed: [testbed-manager] 2025-06-03 15:12:05.511915 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:05.513125 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:05.514113 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:05.514720 | orchestrator | 2025-06-03 15:12:05.516273 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-03 15:12:05.517656 | orchestrator | Tuesday 03 June 2025 15:12:05 +0000 (0:00:13.751) 0:00:37.852 ********** 2025-06-03 15:12:05.575559 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:05.599460 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:05.634958 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:05.661400 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:05.717986 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:05.718525 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:05.719871 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:05.721986 | orchestrator | 2025-06-03 15:12:05.723127 | orchestrator | TASK 
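
The repository role above replaces the classic /etc/apt/sources.list with a deb822-style ubuntu.sources file and refreshes the package cache. A rough sketch of that flow; the mirror, suites and keyring path below are placeholders, as the real content presumably comes from the role's own template and configuration:

- hosts: all
  become: true
  tasks:
    - name: Drop the legacy sources.list
      ansible.builtin.file:
        path: /etc/apt/sources.list
        state: absent

    - name: Write a deb822-style ubuntu.sources file (mirror and suites are placeholders)
      ansible.builtin.copy:
        dest: /etc/apt/sources.list.d/ubuntu.sources
        mode: "0644"
        content: |
          Types: deb
          URIs: http://archive.ubuntu.com/ubuntu
          Suites: noble noble-updates noble-backports
          Components: main restricted universe multiverse
          Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

    - name: Refresh the apt cache
      ansible.builtin.apt:
        update_cache: true
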
[osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-03 15:12:05.723699 | orchestrator | Tuesday 03 June 2025 15:12:05 +0000 (0:00:00.213) 0:00:38.065 ********** 2025-06-03 15:12:05.795829 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:05.818779 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:05.852378 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:05.892298 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:05.974413 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:05.975118 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:05.975417 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:05.975665 | orchestrator | 2025-06-03 15:12:05.975835 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-06-03 15:12:05.976831 | orchestrator | Tuesday 03 June 2025 15:12:05 +0000 (0:00:00.256) 0:00:38.322 ********** 2025-06-03 15:12:06.076374 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:06.114500 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:06.143664 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:06.177796 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:06.245660 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:06.245812 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:06.245851 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:06.245922 | orchestrator | 2025-06-03 15:12:06.246071 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-03 15:12:06.246446 | orchestrator | Tuesday 03 June 2025 15:12:06 +0000 (0:00:00.273) 0:00:38.595 ********** 2025-06-03 15:12:06.540957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:12:06.541087 | orchestrator | 2025-06-03 15:12:06.541162 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-03 15:12:06.545039 | orchestrator | Tuesday 03 June 2025 15:12:06 +0000 (0:00:00.293) 0:00:38.888 ********** 2025-06-03 15:12:08.256488 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:08.256592 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:08.256878 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:08.257114 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:08.259807 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:08.260457 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:08.261062 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:08.261812 | orchestrator | 2025-06-03 15:12:08.262399 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-03 15:12:08.262884 | orchestrator | Tuesday 03 June 2025 15:12:08 +0000 (0:00:01.713) 0:00:40.602 ********** 2025-06-03 15:12:09.324659 | orchestrator | changed: [testbed-manager] 2025-06-03 15:12:09.325221 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:12:09.326097 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:12:09.327100 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:09.329787 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:12:09.330689 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:09.333219 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:09.335462 | orchestrator | 2025-06-03 15:12:09.336161 | 
orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-06-03 15:12:09.337369 | orchestrator | Tuesday 03 June 2025 15:12:09 +0000 (0:00:01.068) 0:00:41.670 ********** 2025-06-03 15:12:10.186354 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:10.189653 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:10.189738 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:10.189954 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:10.190735 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:10.192102 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:10.192573 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:10.192968 | orchestrator | 2025-06-03 15:12:10.194117 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-06-03 15:12:10.194819 | orchestrator | Tuesday 03 June 2025 15:12:10 +0000 (0:00:00.862) 0:00:42.533 ********** 2025-06-03 15:12:10.517717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:12:10.517839 | orchestrator | 2025-06-03 15:12:10.517957 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-06-03 15:12:10.518784 | orchestrator | Tuesday 03 June 2025 15:12:10 +0000 (0:00:00.330) 0:00:42.864 ********** 2025-06-03 15:12:11.542681 | orchestrator | changed: [testbed-manager] 2025-06-03 15:12:11.544591 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:12:11.544638 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:12:11.545267 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:12:11.546172 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:11.547031 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:11.547904 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:11.549309 | orchestrator | 2025-06-03 15:12:11.549923 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-06-03 15:12:11.551093 | orchestrator | Tuesday 03 June 2025 15:12:11 +0000 (0:00:01.026) 0:00:43.891 ********** 2025-06-03 15:12:11.614909 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:12:11.658130 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:12:11.680576 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:12:11.796478 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:12:11.796573 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:12:11.796642 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:12:11.797617 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:12:11.798237 | orchestrator | 2025-06-03 15:12:11.799091 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-06-03 15:12:11.799988 | orchestrator | Tuesday 03 June 2025 15:12:11 +0000 (0:00:00.254) 0:00:44.145 ********** 2025-06-03 15:12:23.045989 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:12:23.046900 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:12:23.046982 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:12:23.048142 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:23.049565 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:23.050573 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:23.051200 | orchestrator | changed: 
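
The rsyslog role's "Forward syslog message to local fluentd daemon" task drops a forwarding rule so that messages received by rsyslog are relayed to a fluentd listener on the same host. The exact rule is not visible in the log; a plausible sketch, with the drop-in file name and target port as assumptions, would be:

- hosts: all
  become: true
  tasks:
    - name: Forward all syslog messages to a local fluentd syslog input
      ansible.builtin.copy:
        dest: /etc/rsyslog.d/10-fluentd.conf
        mode: "0644"
        content: |
          # relay everything to fluentd's in_syslog source on localhost (port is a placeholder)
          *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")

    - name: Restart rsyslog to pick up the new rule
      ansible.builtin.service:
        name: rsyslog
        state: restarted
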
[testbed-manager] 2025-06-03 15:12:23.051811 | orchestrator | 2025-06-03 15:12:23.052294 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-03 15:12:23.052767 | orchestrator | Tuesday 03 June 2025 15:12:23 +0000 (0:00:11.246) 0:00:55.392 ********** 2025-06-03 15:12:24.658382 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:24.658479 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:24.658952 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:24.660744 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:24.661832 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:24.663623 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:24.664490 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:24.665426 | orchestrator | 2025-06-03 15:12:24.666448 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-03 15:12:24.667418 | orchestrator | Tuesday 03 June 2025 15:12:24 +0000 (0:00:01.609) 0:00:57.001 ********** 2025-06-03 15:12:25.594969 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:25.595474 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:25.596909 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:25.598563 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:25.599928 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:25.601029 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:25.602228 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:25.603003 | orchestrator | 2025-06-03 15:12:25.603954 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-06-03 15:12:25.604537 | orchestrator | Tuesday 03 June 2025 15:12:25 +0000 (0:00:00.940) 0:00:57.941 ********** 2025-06-03 15:12:25.684257 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:25.711046 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:25.743182 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:25.769623 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:25.836164 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:25.836954 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:25.839040 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:25.839064 | orchestrator | 2025-06-03 15:12:25.840241 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-03 15:12:25.840265 | orchestrator | Tuesday 03 June 2025 15:12:25 +0000 (0:00:00.243) 0:00:58.184 ********** 2025-06-03 15:12:25.915556 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:25.947483 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:25.970296 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:26.004171 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:26.077551 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:26.077622 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:26.078468 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:26.079158 | orchestrator | 2025-06-03 15:12:26.079999 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-03 15:12:26.080489 | orchestrator | Tuesday 03 June 2025 15:12:26 +0000 (0:00:00.241) 0:00:58.426 ********** 2025-06-03 15:12:26.384341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
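
Two small housekeeping roles run above: osism.commons.systohc installs util-linux-extra (the package that provides hwclock on Ubuntu 24.04) and writes the system time to the hardware clock, and osism.commons.configfs makes sure the configfs filesystem is mounted. An illustrative equivalent, not the actual role code:

- hosts: all
  become: true
  tasks:
    - name: Install the package providing hwclock
      ansible.builtin.apt:
        name: util-linux-extra
        state: present

    - name: Write the current system time to the hardware clock
      ansible.builtin.command: hwclock --systohc
      changed_when: false  # the logged task also reports "ok" rather than "changed"

    - name: Mount configfs via its systemd mount unit
      ansible.builtin.systemd:
        name: sys-kernel-config.mount
        state: started
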
testbed-node-2 2025-06-03 15:12:26.387134 | orchestrator | 2025-06-03 15:12:26.387165 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-06-03 15:12:26.387178 | orchestrator | Tuesday 03 June 2025 15:12:26 +0000 (0:00:00.305) 0:00:58.731 ********** 2025-06-03 15:12:27.998251 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:27.999134 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:27.999190 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:27.999211 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:27.999230 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:27.999536 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:27.999746 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:28.000087 | orchestrator | 2025-06-03 15:12:28.000452 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-03 15:12:28.001029 | orchestrator | Tuesday 03 June 2025 15:12:27 +0000 (0:00:01.614) 0:01:00.346 ********** 2025-06-03 15:12:28.643486 | orchestrator | changed: [testbed-manager] 2025-06-03 15:12:28.643885 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:28.645006 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:12:28.645999 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:12:28.646966 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:12:28.647706 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:28.648134 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:28.649179 | orchestrator | 2025-06-03 15:12:28.649623 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-03 15:12:28.650405 | orchestrator | Tuesday 03 June 2025 15:12:28 +0000 (0:00:00.645) 0:01:00.991 ********** 2025-06-03 15:12:28.722223 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:28.750730 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:28.777301 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:28.807465 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:28.865653 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:28.868245 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:28.869172 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:28.869778 | orchestrator | 2025-06-03 15:12:28.870848 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-03 15:12:28.871441 | orchestrator | Tuesday 03 June 2025 15:12:28 +0000 (0:00:00.222) 0:01:01.214 ********** 2025-06-03 15:12:29.984987 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:29.985188 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:29.986897 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:29.987451 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:29.988211 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:29.988814 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:29.989596 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:29.990350 | orchestrator | 2025-06-03 15:12:29.991372 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-03 15:12:29.991863 | orchestrator | Tuesday 03 June 2025 15:12:29 +0000 (0:00:01.118) 0:01:02.332 ********** 2025-06-03 15:12:31.612639 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:12:31.612840 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:12:31.614570 | orchestrator | changed: [testbed-manager] 2025-06-03 15:12:31.617107 | 
orchestrator | changed: [testbed-node-5] 2025-06-03 15:12:31.618192 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:31.618991 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:31.619901 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:31.620816 | orchestrator | 2025-06-03 15:12:31.621619 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-03 15:12:31.622563 | orchestrator | Tuesday 03 June 2025 15:12:31 +0000 (0:00:01.626) 0:01:03.959 ********** 2025-06-03 15:12:33.846548 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:33.846656 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:33.849161 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:33.850219 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:33.851540 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:33.852628 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:33.853819 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:33.854697 | orchestrator | 2025-06-03 15:12:33.855417 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-03 15:12:33.856383 | orchestrator | Tuesday 03 June 2025 15:12:33 +0000 (0:00:02.232) 0:01:06.191 ********** 2025-06-03 15:13:11.539274 | orchestrator | ok: [testbed-manager] 2025-06-03 15:13:11.539590 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:13:11.539615 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:13:11.540732 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:13:11.542586 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:13:11.543009 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:13:11.544262 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:13:11.545202 | orchestrator | 2025-06-03 15:13:11.546548 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-03 15:13:11.547927 | orchestrator | Tuesday 03 June 2025 15:13:11 +0000 (0:00:37.693) 0:01:43.885 ********** 2025-06-03 15:14:29.528522 | orchestrator | changed: [testbed-manager] 2025-06-03 15:14:29.528642 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:14:29.528658 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:14:29.528669 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:14:29.530334 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:14:29.531576 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:14:29.532682 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:14:29.533552 | orchestrator | 2025-06-03 15:14:29.534364 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-03 15:14:29.535135 | orchestrator | Tuesday 03 June 2025 15:14:29 +0000 (0:01:17.987) 0:03:01.872 ********** 2025-06-03 15:14:31.087228 | orchestrator | ok: [testbed-manager] 2025-06-03 15:14:31.087442 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:14:31.087698 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:14:31.088834 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:14:31.089577 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:14:31.090287 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:14:31.090918 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:14:31.091488 | orchestrator | 2025-06-03 15:14:31.092176 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-03 15:14:31.092708 | orchestrator | Tuesday 03 June 2025 15:14:31 +0000 (0:00:01.562) 0:03:03.435 ********** 2025-06-03 
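
The packages role above first installs needrestart and switches it into a non-interactive mode so unattended package operations are not blocked by restart prompts, then updates the cache, upgrades the existing packages, installs the required package set, and finally cleans the apt cache and removes orphaned dependencies. A compressed sketch; the needrestart drop-in path, the restart mode and the package list are all assumptions:

- hosts: all
  become: true
  vars:
    required_packages: [tmux, rsync]  # placeholder list; the role defines its own
  tasks:
    - name: Let needrestart run non-interactively (path and mode are assumptions)
      ansible.builtin.lineinfile:
        path: /etc/needrestart/conf.d/osism.conf
        line: "$nrconf{restart} = 'a';"
        create: true
        mode: "0644"

    - name: Upgrade the installed packages
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true
        cache_valid_time: 3600

    - name: Install the required packages
      ansible.builtin.apt:
        name: "{{ required_packages }}"
        state: present

    - name: Clean the cache and remove unneeded dependencies
      ansible.builtin.apt:
        autoclean: true
        autoremove: true
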
15:14:43.242247 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:14:43.242442 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:14:43.243653 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:14:43.243676 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:14:43.243687 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:14:43.244028 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:14:43.245630 | orchestrator | changed: [testbed-manager] 2025-06-03 15:14:43.246993 | orchestrator | 2025-06-03 15:14:43.248023 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-03 15:14:43.249106 | orchestrator | Tuesday 03 June 2025 15:14:43 +0000 (0:00:12.153) 0:03:15.589 ********** 2025-06-03 15:14:43.671286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-03 15:14:43.671892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-03 15:14:43.675999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-03 15:14:43.676036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-03 15:14:43.676507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-06-03 15:14:43.676701 | orchestrator | 2025-06-03 15:14:43.677383 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-03 15:14:43.677748 | orchestrator | Tuesday 03 June 2025 15:14:43 +0000 (0:00:00.429) 0:03:16.018 ********** 2025-06-03 15:14:43.724000 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-03 15:14:43.752528 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-03 
15:14:43.753576 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:14:43.784746 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:14:43.784835 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-03 15:14:43.817821 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:14:43.817905 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-03 15:14:43.850591 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:14:44.382169 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-03 15:14:44.382960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-03 15:14:44.383381 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-03 15:14:44.384788 | orchestrator | 2025-06-03 15:14:44.384956 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-03 15:14:44.385426 | orchestrator | Tuesday 03 June 2025 15:14:44 +0000 (0:00:00.711) 0:03:16.730 ********** 2025-06-03 15:14:44.450992 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-03 15:14:44.451271 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-03 15:14:44.451755 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-03 15:14:44.452409 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-03 15:14:44.453055 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-03 15:14:44.453450 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-03 15:14:44.516460 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-03 15:14:44.516728 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-03 15:14:44.517823 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-03 15:14:44.518642 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-03 15:14:44.519526 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-03 15:14:44.520545 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-03 15:14:44.521701 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-03 15:14:44.522547 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-03 15:14:44.523494 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-03 15:14:44.523872 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-03 15:14:44.524427 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-03 15:14:44.524771 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-03 15:14:44.525449 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-03 15:14:44.525790 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-03 15:14:44.526279 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-03 15:14:44.526805 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-03 15:14:44.527465 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-03 15:14:44.527744 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-03 15:14:44.528483 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-03 15:14:44.528879 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-03 15:14:44.529467 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-03 15:14:44.529808 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-03 15:14:44.530359 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-03 15:14:44.530910 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-03 15:14:44.542220 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:14:44.581600 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:14:44.581706 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-03 15:14:44.581723 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-03 15:14:44.584656 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-03 15:14:44.591264 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-03 15:14:44.591408 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-03 15:14:44.592000 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-03 15:14:44.593928 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-03 15:14:44.593989 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-03 15:14:44.614914 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-03 15:14:44.615136 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:14:44.615852 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-03 15:14:48.190379 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:14:48.190558 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-03 15:14:48.193077 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-03 15:14:48.193983 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-03 15:14:48.194318 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-03 15:14:48.196819 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-03 15:14:48.197478 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-03 15:14:48.198101 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-03 15:14:48.199170 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-03 15:14:48.199613 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-03 15:14:48.200660 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-03 15:14:48.200982 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-03 15:14:48.201390 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-03 15:14:48.202127 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-03 15:14:48.202538 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-03 15:14:48.202813 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-03 15:14:48.203267 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-03 15:14:48.204013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-03 15:14:48.204532 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-03 15:14:48.204857 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-03 15:14:48.205267 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-03 15:14:48.206003 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-03 15:14:48.206176 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-03 15:14:48.206538 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-03 15:14:48.206910 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-03 15:14:48.207360 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-03 15:14:48.207539 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-03 15:14:48.207957 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-03 15:14:48.208431 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-03 15:14:48.208671 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-03 15:14:48.208963 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-03 15:14:48.209325 | orchestrator | 2025-06-03 15:14:48.209693 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-03 15:14:48.209981 | orchestrator | Tuesday 03 June 2025 15:14:48 +0000 (0:00:03.808) 0:03:20.538 ********** 2025-06-03 15:14:48.776159 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-03 15:14:48.777132 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-03 15:14:48.778138 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-03 15:14:48.780047 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-03 15:14:48.781449 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-03 15:14:48.781488 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-03 15:14:48.781500 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-03 15:14:48.781700 | orchestrator | 2025-06-03 15:14:48.782089 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-03 15:14:48.782781 | orchestrator | Tuesday 03 June 2025 15:14:48 +0000 (0:00:00.586) 0:03:21.125 ********** 2025-06-03 15:14:48.833774 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-03 15:14:48.862836 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:14:48.937591 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-03 15:14:49.253695 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-03 15:14:49.253991 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:14:49.256607 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:14:49.257480 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-03 15:14:49.257509 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:14:49.257570 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-03 15:14:49.258279 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-03 15:14:49.259072 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-03 15:14:49.260580 | orchestrator | 2025-06-03 15:14:49.260800 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-06-03 15:14:49.261493 | orchestrator | Tuesday 03 June 2025 15:14:49 +0000 (0:00:00.477) 0:03:21.602 ********** 2025-06-03 15:14:49.313154 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-03 15:14:49.339927 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:14:49.422832 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-03 15:14:49.820564 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:14:49.824831 | orchestrator | skipping: [testbed-node-1] => (item={'name': 
'fs.inotify.max_user_instances', 'value': 1024})  2025-06-03 15:14:49.824972 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:14:49.824992 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-03 15:14:49.826069 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:14:49.826584 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-03 15:14:49.827485 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-03 15:14:49.828156 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-03 15:14:49.828890 | orchestrator | 2025-06-03 15:14:49.829776 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-06-03 15:14:49.830514 | orchestrator | Tuesday 03 June 2025 15:14:49 +0000 (0:00:00.567) 0:03:22.169 ********** 2025-06-03 15:14:49.888829 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:14:49.914833 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:14:49.947765 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:14:49.972659 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:14:49.996050 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:14:50.140046 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:14:50.141079 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:14:50.141754 | orchestrator | 2025-06-03 15:14:50.142538 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-06-03 15:14:50.143648 | orchestrator | Tuesday 03 June 2025 15:14:50 +0000 (0:00:00.318) 0:03:22.488 ********** 2025-06-03 15:14:55.855169 | orchestrator | ok: [testbed-manager] 2025-06-03 15:14:55.855283 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:14:55.857207 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:14:55.857744 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:14:55.857914 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:14:55.858767 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:14:55.859351 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:14:55.859995 | orchestrator | 2025-06-03 15:14:55.860770 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-06-03 15:14:55.861616 | orchestrator | Tuesday 03 June 2025 15:14:55 +0000 (0:00:05.714) 0:03:28.202 ********** 2025-06-03 15:14:55.927504 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-06-03 15:14:55.968910 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:14:55.968980 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-06-03 15:14:55.969978 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-06-03 15:14:56.001699 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:14:56.046453 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:14:56.046610 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-06-03 15:14:56.081527 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:14:56.083871 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-06-03 15:14:56.085768 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-06-03 15:14:56.154297 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:14:56.155012 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:14:56.156722 | orchestrator | 
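
The sysctl role applies different parameter groups to different hosts: vm.max_map_count=262144 (elasticsearch) and the RabbitMQ TCP keepalive/buffer tuning land on testbed-node-0..2 (presumably the control plane), vm.swappiness=1 goes everywhere, and net.netfilter.nf_conntrack_max=1048576 plus fs.inotify.max_user_instances=1024 go to testbed-node-3..5. Each entry boils down to a sysctl module call; the generic group, for example, could be written as the following sketch (values taken from the run above, module from the ansible.posix collection):

- hosts: all
  become: true
  tasks:
    - name: Apply the generic sysctl settings
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true
        reload: true
      loop:
        - { name: vm.swappiness, value: 1 }

The other groups follow the same pattern with the item lists shown in the task output above.
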
skipping: [testbed-node-2] => (item=nscd)  2025-06-03 15:14:56.157724 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:14:56.158101 | orchestrator | 2025-06-03 15:14:56.159074 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-06-03 15:14:56.159297 | orchestrator | Tuesday 03 June 2025 15:14:56 +0000 (0:00:00.301) 0:03:28.504 ********** 2025-06-03 15:14:57.166838 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-06-03 15:14:57.166970 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-06-03 15:14:57.169101 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-06-03 15:14:57.169160 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-06-03 15:14:57.169172 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-06-03 15:14:57.169183 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-06-03 15:14:57.169479 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-06-03 15:14:57.170241 | orchestrator | 2025-06-03 15:14:57.170887 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-06-03 15:14:57.171466 | orchestrator | Tuesday 03 June 2025 15:14:57 +0000 (0:00:01.008) 0:03:29.513 ********** 2025-06-03 15:14:57.692542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:14:57.693397 | orchestrator | 2025-06-03 15:14:57.695058 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-06-03 15:14:57.695084 | orchestrator | Tuesday 03 June 2025 15:14:57 +0000 (0:00:00.526) 0:03:30.039 ********** 2025-06-03 15:14:58.971170 | orchestrator | ok: [testbed-manager] 2025-06-03 15:14:58.971844 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:14:58.973926 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:14:58.975194 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:14:58.976174 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:14:58.977165 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:14:58.978105 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:14:58.978903 | orchestrator | 2025-06-03 15:14:58.979888 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-03 15:14:58.980543 | orchestrator | Tuesday 03 June 2025 15:14:58 +0000 (0:00:01.279) 0:03:31.319 ********** 2025-06-03 15:14:59.582005 | orchestrator | ok: [testbed-manager] 2025-06-03 15:14:59.585456 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:14:59.586257 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:14:59.589392 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:14:59.590155 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:14:59.591053 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:14:59.592933 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:14:59.593752 | orchestrator | 2025-06-03 15:14:59.594907 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-03 15:14:59.595536 | orchestrator | Tuesday 03 June 2025 15:14:59 +0000 (0:00:00.610) 0:03:31.929 ********** 2025-06-03 15:15:00.164216 | orchestrator | changed: [testbed-manager] 2025-06-03 15:15:00.164423 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:15:00.166359 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:15:00.166536 | orchestrator | 
changed: [testbed-node-5] 2025-06-03 15:15:00.167817 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:15:00.168711 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:15:00.169423 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:15:00.170138 | orchestrator | 2025-06-03 15:15:00.170496 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-06-03 15:15:00.171043 | orchestrator | Tuesday 03 June 2025 15:15:00 +0000 (0:00:00.583) 0:03:32.513 ********** 2025-06-03 15:15:00.759022 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:15:00.759226 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:15:00.761876 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:15:00.763024 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:15:00.764386 | orchestrator | ok: [testbed-manager] 2025-06-03 15:15:00.765840 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:15:00.766981 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:15:00.767853 | orchestrator | 2025-06-03 15:15:00.768848 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-06-03 15:15:00.769209 | orchestrator | Tuesday 03 June 2025 15:15:00 +0000 (0:00:00.592) 0:03:33.105 ********** 2025-06-03 15:15:01.672161 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962388.1049492, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.672397 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962435.9166722, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.675344 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962431.3240678, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.676826 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962436.6540687, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.678483 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962462.2288325, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.679923 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962447.8478103, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.680724 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962459.7448153, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.681520 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962421.664609, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.682231 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962332.339093, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.683085 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962325.1422894, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.684083 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962334.2588224, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.684182 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962356.8909671, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.684793 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962350.7930763, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.685489 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962345.2863429, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:15:01.685814 | orchestrator | 2025-06-03 15:15:01.686267 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-06-03 15:15:01.686583 | orchestrator | Tuesday 03 June 2025 15:15:01 +0000 (0:00:00.914) 0:03:34.020 ********** 2025-06-03 15:15:02.782529 | orchestrator | changed: [testbed-manager] 2025-06-03 15:15:02.782639 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:15:02.782653 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:15:02.782757 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:15:02.783460 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:15:02.784458 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:15:02.785747 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:15:02.786556 | orchestrator | 2025-06-03 15:15:02.787514 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-06-03 15:15:02.788224 | orchestrator | Tuesday 03 June 2025 15:15:02 +0000 (0:00:01.108) 0:03:35.128 ********** 2025-06-03 15:15:03.953810 | 
orchestrator | changed: [testbed-manager] 2025-06-03 15:15:03.955857 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:15:03.956544 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:15:03.957061 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:15:03.958172 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:15:03.958812 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:15:03.959719 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:15:03.960080 | orchestrator | 2025-06-03 15:15:03.960862 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-06-03 15:15:03.961540 | orchestrator | Tuesday 03 June 2025 15:15:03 +0000 (0:00:01.172) 0:03:36.300 ********** 2025-06-03 15:15:05.038267 | orchestrator | changed: [testbed-manager] 2025-06-03 15:15:05.038436 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:15:05.038451 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:15:05.038533 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:15:05.039564 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:15:05.040448 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:15:05.040474 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:15:05.041022 | orchestrator | 2025-06-03 15:15:05.041692 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-06-03 15:15:05.042160 | orchestrator | Tuesday 03 June 2025 15:15:05 +0000 (0:00:01.085) 0:03:37.386 ********** 2025-06-03 15:15:05.122639 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:15:05.164719 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:15:05.200806 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:15:05.228041 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:15:05.281585 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:15:05.284576 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:15:05.285550 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:15:05.286798 | orchestrator | 2025-06-03 15:15:05.287450 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-06-03 15:15:05.288524 | orchestrator | Tuesday 03 June 2025 15:15:05 +0000 (0:00:00.243) 0:03:37.630 ********** 2025-06-03 15:15:05.923677 | orchestrator | ok: [testbed-manager] 2025-06-03 15:15:05.924840 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:15:05.929101 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:15:05.929151 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:15:05.929727 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:15:05.930457 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:15:05.931153 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:15:05.934678 | orchestrator | 2025-06-03 15:15:05.934749 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-06-03 15:15:05.935235 | orchestrator | Tuesday 03 June 2025 15:15:05 +0000 (0:00:00.641) 0:03:38.271 ********** 2025-06-03 15:15:06.288273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:15:06.290073 | orchestrator | 2025-06-03 15:15:06.290107 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-06-03 15:15:06.290609 | orchestrator | Tuesday 03 
June 2025 15:15:06 +0000 (0:00:00.362) 0:03:38.633 **********
2025-06-03 15:15:13.688908 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:13.689187 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:15:13.690641 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:15:13.691849 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:15:13.693443 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:15:13.694242 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:15:13.696730 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:15:13.700871 | orchestrator |
2025-06-03 15:15:13.701582 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-06-03 15:15:13.702083 | orchestrator | Tuesday 03 June 2025 15:15:13 +0000 (0:00:07.401) 0:03:46.035 **********
2025-06-03 15:15:14.992877 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:14.992983 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:14.993547 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:14.993967 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:14.994843 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:14.995409 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:14.997929 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:14.998139 | orchestrator |
2025-06-03 15:15:14.998732 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-06-03 15:15:14.999561 | orchestrator | Tuesday 03 June 2025 15:15:14 +0000 (0:00:01.306) 0:03:47.341 **********
2025-06-03 15:15:16.043072 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:16.043260 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:16.047129 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:16.047175 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:16.047187 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:16.048940 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:16.049874 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:16.050688 | orchestrator |
2025-06-03 15:15:16.052142 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-06-03 15:15:16.053074 | orchestrator | Tuesday 03 June 2025 15:15:16 +0000 (0:00:01.049) 0:03:48.390 **********
2025-06-03 15:15:16.553855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:15:16.553962 | orchestrator |
2025-06-03 15:15:16.557149 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-06-03 15:15:16.557212 | orchestrator | Tuesday 03 June 2025 15:15:16 +0000 (0:00:00.510) 0:03:48.901 **********
2025-06-03 15:15:24.843001 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:15:24.843120 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:15:24.844940 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:15:24.846882 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:15:24.847626 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:15:24.848745 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:15:24.849540 | orchestrator | changed: [testbed-manager]
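Note: the osism.services.rng and osism.services.smartd tasks above follow the usual install-configure-enable pattern. The following is a minimal, hypothetical Ansible sketch of that pattern for smartmontools only; task wording, the DEVICESCAN directive and the service unit name are illustrative assumptions, not the actual content of the osism.services.smartd role.

```yaml
# Hypothetical sketch of the install/configure/enable pattern seen above.
# Only the module names are standard Ansible; paths, the DEVICESCAN line and
# the unit name are assumptions for illustration.
- name: Install smartmontools package
  ansible.builtin.apt:
    name: smartmontools
    state: present

- name: Create /var/log/smartd directory
  ansible.builtin.file:
    path: /var/log/smartd
    state: directory
    owner: root
    group: root
    mode: "0755"

- name: Copy smartmontools configuration file
  ansible.builtin.copy:
    dest: /etc/smartd.conf
    content: |
      # Monitor all devices and mail root on failures (illustrative default)
      DEVICESCAN -a -m root
    mode: "0644"

- name: Manage smartd service
  ansible.builtin.service:
    name: smartd    # unit may be called smartmontools on some Debian/Ubuntu releases
    state: started
    enabled: true
```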
2025-06-03 15:15:24.849905 | orchestrator |
2025-06-03 15:15:24.850794 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-06-03 15:15:24.851510 | orchestrator | Tuesday 03 June 2025 15:15:24 +0000 (0:00:08.288) 0:03:57.190 **********
2025-06-03 15:15:25.518723 | orchestrator | changed: [testbed-manager]
2025-06-03 15:15:25.520514 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:15:25.521481 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:15:25.523170 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:15:25.523242 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:15:25.523869 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:15:25.524525 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:15:25.525152 | orchestrator |
2025-06-03 15:15:25.525933 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-06-03 15:15:25.526525 | orchestrator | Tuesday 03 June 2025 15:15:25 +0000 (0:00:00.677) 0:03:57.867 **********
2025-06-03 15:15:26.637515 | orchestrator | changed: [testbed-manager]
2025-06-03 15:15:26.638718 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:15:26.639511 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:15:26.642470 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:15:26.643354 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:15:26.643552 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:15:26.644536 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:15:26.644756 | orchestrator |
2025-06-03 15:15:26.645478 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-06-03 15:15:26.645906 | orchestrator | Tuesday 03 June 2025 15:15:26 +0000 (0:00:01.116) 0:03:58.984 **********
2025-06-03 15:15:27.735246 | orchestrator | changed: [testbed-manager]
2025-06-03 15:15:27.735675 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:15:27.736603 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:15:27.736638 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:15:27.737362 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:15:27.737710 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:15:27.738324 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:15:27.738717 | orchestrator |
2025-06-03 15:15:27.739355 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-06-03 15:15:27.739776 | orchestrator | Tuesday 03 June 2025 15:15:27 +0000 (0:00:01.092) 0:04:00.076 **********
2025-06-03 15:15:27.835480 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:27.873926 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:27.906246 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:27.946963 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:28.024675 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:28.026540 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:28.027634 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:28.028271 | orchestrator |
2025-06-03 15:15:28.028778 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-06-03 15:15:28.029707 | orchestrator | Tuesday 03 June 2025 15:15:28 +0000 (0:00:00.298) 0:04:00.374 **********
2025-06-03 15:15:28.147437 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:28.187409 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:28.225094 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:28.263708 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:28.344749 | orchestrator | ok: [testbed-node-0]
2025-06-03
15:15:28.344906 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:15:28.345668 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:15:28.345908 | orchestrator | 2025-06-03 15:15:28.346852 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-06-03 15:15:28.349041 | orchestrator | Tuesday 03 June 2025 15:15:28 +0000 (0:00:00.319) 0:04:00.693 ********** 2025-06-03 15:15:28.471665 | orchestrator | ok: [testbed-manager] 2025-06-03 15:15:28.513043 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:15:28.545799 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:15:28.582606 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:15:28.688838 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:15:28.689863 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:15:28.692128 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:15:28.692618 | orchestrator | 2025-06-03 15:15:28.693974 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-06-03 15:15:28.695995 | orchestrator | Tuesday 03 June 2025 15:15:28 +0000 (0:00:00.343) 0:04:01.037 ********** 2025-06-03 15:15:34.450413 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:15:34.451786 | orchestrator | ok: [testbed-manager] 2025-06-03 15:15:34.452966 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:15:34.453836 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:15:34.455605 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:15:34.455817 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:15:34.456916 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:15:34.457676 | orchestrator | 2025-06-03 15:15:34.458704 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-06-03 15:15:34.459442 | orchestrator | Tuesday 03 June 2025 15:15:34 +0000 (0:00:05.759) 0:04:06.797 ********** 2025-06-03 15:15:34.896590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:15:34.900114 | orchestrator | 2025-06-03 15:15:34.900796 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-06-03 15:15:34.904500 | orchestrator | Tuesday 03 June 2025 15:15:34 +0000 (0:00:00.446) 0:04:07.244 ********** 2025-06-03 15:15:34.988835 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-06-03 15:15:34.989480 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-06-03 15:15:34.990243 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-06-03 15:15:35.037655 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:15:35.040857 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-06-03 15:15:35.114236 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-06-03 15:15:35.114357 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:15:35.116729 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-06-03 15:15:35.116770 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-06-03 15:15:35.117580 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-06-03 15:15:35.146262 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:15:35.193275 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:15:35.194002 | 
orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-06-03 15:15:35.197918 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-06-03 15:15:35.198257 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-06-03 15:15:35.266374 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-06-03 15:15:35.267023 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:15:35.267805 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:15:35.268254 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-06-03 15:15:35.269135 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-06-03 15:15:35.269439 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:15:35.271027 | orchestrator | 2025-06-03 15:15:35.271049 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-06-03 15:15:35.272005 | orchestrator | Tuesday 03 June 2025 15:15:35 +0000 (0:00:00.372) 0:04:07.616 ********** 2025-06-03 15:15:35.665520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:15:35.666717 | orchestrator | 2025-06-03 15:15:35.672772 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-06-03 15:15:35.672810 | orchestrator | Tuesday 03 June 2025 15:15:35 +0000 (0:00:00.397) 0:04:08.013 ********** 2025-06-03 15:15:35.703647 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-06-03 15:15:35.782143 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:15:35.782276 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-06-03 15:15:35.783011 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-06-03 15:15:35.819253 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:15:35.820357 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-06-03 15:15:35.853693 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:15:35.896764 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:15:35.896913 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-06-03 15:15:35.978855 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:15:35.979899 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-06-03 15:15:35.981340 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:15:35.982118 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-06-03 15:15:35.983114 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:15:35.983851 | orchestrator | 2025-06-03 15:15:35.985140 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-06-03 15:15:35.986594 | orchestrator | Tuesday 03 June 2025 15:15:35 +0000 (0:00:00.314) 0:04:08.328 ********** 2025-06-03 15:15:36.528278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:15:36.529754 | orchestrator | 2025-06-03 15:15:36.530880 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] 
********************** 2025-06-03 15:15:36.532756 | orchestrator | Tuesday 03 June 2025 15:15:36 +0000 (0:00:00.543) 0:04:08.871 ********** 2025-06-03 15:16:10.077848 | orchestrator | changed: [testbed-manager] 2025-06-03 15:16:10.077964 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:16:10.077979 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:16:10.077991 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:16:10.078002 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:16:10.078012 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:16:10.078074 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:16:10.078087 | orchestrator | 2025-06-03 15:16:10.078456 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-06-03 15:16:10.079932 | orchestrator | Tuesday 03 June 2025 15:16:10 +0000 (0:00:33.548) 0:04:42.420 ********** 2025-06-03 15:16:18.534952 | orchestrator | changed: [testbed-manager] 2025-06-03 15:16:18.535184 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:16:18.536345 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:16:18.537264 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:16:18.540011 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:16:18.540097 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:16:18.541862 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:16:18.544010 | orchestrator | 2025-06-03 15:16:18.544667 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-06-03 15:16:18.545076 | orchestrator | Tuesday 03 June 2025 15:16:18 +0000 (0:00:08.462) 0:04:50.882 ********** 2025-06-03 15:16:26.108352 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:16:26.109079 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:16:26.109433 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:16:26.110900 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:16:26.112610 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:16:26.113179 | orchestrator | changed: [testbed-manager] 2025-06-03 15:16:26.114443 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:16:26.114632 | orchestrator | 2025-06-03 15:16:26.115624 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-06-03 15:16:26.116447 | orchestrator | Tuesday 03 June 2025 15:16:26 +0000 (0:00:07.573) 0:04:58.455 ********** 2025-06-03 15:16:27.892866 | orchestrator | ok: [testbed-manager] 2025-06-03 15:16:27.893009 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:16:27.893027 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:16:27.893070 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:16:27.893412 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:16:27.895784 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:16:27.895834 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:16:27.895851 | orchestrator | 2025-06-03 15:16:27.895872 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-06-03 15:16:27.895893 | orchestrator | Tuesday 03 June 2025 15:16:27 +0000 (0:00:01.783) 0:05:00.239 ********** 2025-06-03 15:16:33.673872 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:16:33.674955 | orchestrator | changed: [testbed-manager] 2025-06-03 15:16:33.674991 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:16:33.675003 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:16:33.675057 | orchestrator | changed: 
[testbed-node-3]
2025-06-03 15:16:33.675943 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:16:33.676988 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:16:33.677538 | orchestrator |
2025-06-03 15:16:33.679587 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-03 15:16:33.680711 | orchestrator | Tuesday 03 June 2025 15:16:33 +0000 (0:00:05.778) 0:05:06.017 **********
2025-06-03 15:16:34.104156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:16:34.104235 | orchestrator |
2025-06-03 15:16:34.105141 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-03 15:16:34.105714 | orchestrator | Tuesday 03 June 2025 15:16:34 +0000 (0:00:00.435) 0:05:06.453 **********
2025-06-03 15:16:34.981369 | orchestrator | changed: [testbed-manager]
2025-06-03 15:16:34.984062 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:16:34.984124 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:16:34.984143 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:16:34.985388 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:16:34.987164 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:16:34.987968 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:16:34.988541 | orchestrator |
2025-06-03 15:16:34.989260 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-03 15:16:34.990424 | orchestrator | Tuesday 03 June 2025 15:16:34 +0000 (0:00:00.874) 0:05:07.327 **********
2025-06-03 15:16:36.600908 | orchestrator | ok: [testbed-manager]
2025-06-03 15:16:36.602948 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:16:36.604066 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:16:36.604617 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:16:36.607471 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:16:36.607497 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:16:36.608076 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:16:36.609267 | orchestrator |
2025-06-03 15:16:36.610701 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-06-03 15:16:36.611345 | orchestrator | Tuesday 03 June 2025 15:16:36 +0000 (0:00:01.620) 0:05:08.947 **********
2025-06-03 15:16:37.473021 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:16:37.474580 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:16:37.478363 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:16:37.478392 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:16:37.478403 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:16:37.478910 | orchestrator | changed: [testbed-manager]
2025-06-03 15:16:37.479449 | orchestrator | changed: [testbed-node-2]
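Note: the timezone handling above (install tzdata, set the clock to UTC) maps to a small number of standard tasks. Below is a minimal sketch, assuming the community.general collection is available; the hard-coded timezone value and task wording are illustrative and not taken from osism.commons.timezone.

```yaml
# Hypothetical sketch of the tzdata/UTC steps above; the timezone would
# normally come from a role variable, hard-coded here for illustration.
- name: Install tzdata package
  ansible.builtin.apt:
    name: tzdata
    state: present

- name: Set timezone to UTC
  community.general.timezone:
    name: UTC
```

The /etc/adjtime tasks that follow only apply when a hardware clock is present, which is why they are reported as skipping on these virtual nodes.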
2025-06-03 15:16:37.480443 | orchestrator |
2025-06-03 15:16:37.481864 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-06-03 15:16:37.487039 | orchestrator | Tuesday 03 June 2025 15:16:37 +0000 (0:00:00.872) 0:05:09.820 **********
2025-06-03 15:16:37.708011 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:16:37.758388 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:16:37.810806 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:16:37.849202 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:16:37.887728 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:16:37.959274 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:16:37.959796 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:16:37.960941 | orchestrator |
2025-06-03 15:16:37.961664 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-06-03 15:16:37.961994 | orchestrator | Tuesday 03 June 2025 15:16:37 +0000 (0:00:00.486) 0:05:10.307 **********
2025-06-03 15:16:38.040986 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:16:38.119924 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:16:38.157602 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:16:38.193763 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:16:38.420389 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:16:38.421020 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:16:38.422754 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:16:38.424020 | orchestrator |
2025-06-03 15:16:38.425350 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-06-03 15:16:38.425888 | orchestrator | Tuesday 03 June 2025 15:16:38 +0000 (0:00:00.460) 0:05:10.768 **********
2025-06-03 15:16:38.541134 | orchestrator | ok: [testbed-manager]
2025-06-03 15:16:38.578895 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:16:38.616857 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:16:38.654973 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:16:38.760364 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:16:38.764188 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:16:38.764228 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:16:38.765828 | orchestrator |
2025-06-03 15:16:38.766576 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-06-03 15:16:38.767858 | orchestrator | Tuesday 03 June 2025 15:16:38 +0000 (0:00:00.340) 0:05:11.109 **********
2025-06-03 15:16:38.878619 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:16:38.916253 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:16:38.950011 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:16:38.981088 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:16:39.057099 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:16:39.059698 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:16:39.060541 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:16:39.062561 | orchestrator |
2025-06-03 15:16:39.062683 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-06-03 15:16:39.064195 | orchestrator | Tuesday 03 June 2025 15:16:39 +0000 (0:00:00.295) 0:05:11.405 **********
2025-06-03 15:16:39.170198 | orchestrator | ok: [testbed-manager]
2025-06-03 15:16:39.211337 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:16:39.269941 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:16:39.307209 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:16:39.388601 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:16:39.389255 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:16:39.393701 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:16:39.393760 | orchestrator |
2025-06-03 15:16:39.393769 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-06-03 15:16:39.393778 | orchestrator | Tuesday 03 June 2025 15:16:39 +0000
(0:00:00.330) 0:05:11.736 ********** 2025-06-03 15:16:39.512212 | orchestrator | ok: [testbed-manager] =>  2025-06-03 15:16:39.512896 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:16:39.565793 | orchestrator | ok: [testbed-node-3] =>  2025-06-03 15:16:39.565895 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:16:39.601001 | orchestrator | ok: [testbed-node-4] =>  2025-06-03 15:16:39.601098 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:16:39.646192 | orchestrator | ok: [testbed-node-5] =>  2025-06-03 15:16:39.647172 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:16:39.719192 | orchestrator | ok: [testbed-node-0] =>  2025-06-03 15:16:39.722156 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:16:39.723858 | orchestrator | ok: [testbed-node-1] =>  2025-06-03 15:16:39.725534 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:16:39.727957 | orchestrator | ok: [testbed-node-2] =>  2025-06-03 15:16:39.728152 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:16:39.728172 | orchestrator | 2025-06-03 15:16:39.728191 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-03 15:16:39.728338 | orchestrator | Tuesday 03 June 2025 15:16:39 +0000 (0:00:00.332) 0:05:12.068 ********** 2025-06-03 15:16:39.854479 | orchestrator | ok: [testbed-manager] =>  2025-06-03 15:16:39.854665 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:16:40.021206 | orchestrator | ok: [testbed-node-3] =>  2025-06-03 15:16:40.021480 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:16:40.062806 | orchestrator | ok: [testbed-node-4] =>  2025-06-03 15:16:40.066495 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:16:40.099146 | orchestrator | ok: [testbed-node-5] =>  2025-06-03 15:16:40.104026 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:16:40.187758 | orchestrator | ok: [testbed-node-0] =>  2025-06-03 15:16:40.188199 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:16:40.189650 | orchestrator | ok: [testbed-node-1] =>  2025-06-03 15:16:40.190693 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:16:40.191922 | orchestrator | ok: [testbed-node-2] =>  2025-06-03 15:16:40.192540 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:16:40.198843 | orchestrator | 2025-06-03 15:16:40.198886 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-03 15:16:40.198895 | orchestrator | Tuesday 03 June 2025 15:16:40 +0000 (0:00:00.467) 0:05:12.536 ********** 2025-06-03 15:16:40.265768 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:16:40.336956 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:16:40.383927 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:16:40.417509 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:16:40.486918 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:16:40.488184 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:16:40.489880 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:16:40.490560 | orchestrator | 2025-06-03 15:16:40.491682 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-03 15:16:40.493577 | orchestrator | Tuesday 03 June 2025 15:16:40 +0000 (0:00:00.297) 0:05:12.834 ********** 2025-06-03 15:16:40.580836 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:16:40.619164 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:16:40.658775 | orchestrator 
| skipping: [testbed-node-4] 2025-06-03 15:16:40.692371 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:16:40.727435 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:16:40.799830 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:16:40.800663 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:16:40.801912 | orchestrator | 2025-06-03 15:16:40.803684 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-03 15:16:40.803849 | orchestrator | Tuesday 03 June 2025 15:16:40 +0000 (0:00:00.313) 0:05:13.148 ********** 2025-06-03 15:16:41.237510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:16:41.238199 | orchestrator | 2025-06-03 15:16:41.239870 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-03 15:16:41.240462 | orchestrator | Tuesday 03 June 2025 15:16:41 +0000 (0:00:00.437) 0:05:13.585 ********** 2025-06-03 15:16:42.150909 | orchestrator | ok: [testbed-manager] 2025-06-03 15:16:42.151996 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:16:42.153586 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:16:42.154452 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:16:42.155927 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:16:42.156555 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:16:42.157109 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:16:42.158162 | orchestrator | 2025-06-03 15:16:42.159497 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-03 15:16:42.160238 | orchestrator | Tuesday 03 June 2025 15:16:42 +0000 (0:00:00.912) 0:05:14.497 ********** 2025-06-03 15:16:44.909708 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:16:44.910140 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:16:44.911191 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:16:44.914009 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:16:44.914821 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:16:44.916346 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:16:44.917629 | orchestrator | ok: [testbed-manager] 2025-06-03 15:16:44.918467 | orchestrator | 2025-06-03 15:16:44.919327 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-03 15:16:44.920326 | orchestrator | Tuesday 03 June 2025 15:16:44 +0000 (0:00:02.758) 0:05:17.256 ********** 2025-06-03 15:16:44.986522 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-03 15:16:44.987190 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-03 15:16:45.066586 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-03 15:16:45.066688 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-03 15:16:45.067605 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-03 15:16:45.141575 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-03 15:16:45.143092 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:16:45.145742 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-03 15:16:45.151667 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-03 15:16:45.152596 | orchestrator | skipping: 
[testbed-node-4] => (item=docker-engine)  2025-06-03 15:16:45.392246 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:16:45.393438 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-06-03 15:16:45.395406 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-06-03 15:16:45.399938 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-06-03 15:16:45.468901 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:16:45.471135 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-06-03 15:16:45.473475 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-06-03 15:16:45.473818 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-06-03 15:16:45.541716 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:16:45.541885 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-06-03 15:16:45.542068 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-06-03 15:16:45.542840 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-06-03 15:16:45.770815 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:16:45.771481 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:16:45.773897 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-06-03 15:16:45.774918 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-06-03 15:16:45.776127 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-06-03 15:16:45.777152 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:16:45.779735 | orchestrator | 2025-06-03 15:16:45.780412 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-06-03 15:16:45.781624 | orchestrator | Tuesday 03 June 2025 15:16:45 +0000 (0:00:00.861) 0:05:18.118 ********** 2025-06-03 15:16:51.693555 | orchestrator | ok: [testbed-manager] 2025-06-03 15:16:51.693672 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:16:51.693752 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:16:51.696143 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:16:51.696772 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:16:51.697747 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:16:51.699005 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:16:51.699689 | orchestrator | 2025-06-03 15:16:51.700459 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-06-03 15:16:51.702331 | orchestrator | Tuesday 03 June 2025 15:16:51 +0000 (0:00:05.918) 0:05:24.036 ********** 2025-06-03 15:16:52.769037 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:16:52.769941 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:16:52.771419 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:16:52.772431 | orchestrator | ok: [testbed-manager] 2025-06-03 15:16:52.773515 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:16:52.774393 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:16:52.775241 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:16:52.775679 | orchestrator | 2025-06-03 15:16:52.776519 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-06-03 15:16:52.777183 | orchestrator | Tuesday 03 June 2025 15:16:52 +0000 (0:00:01.078) 0:05:25.115 ********** 2025-06-03 15:17:01.061909 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:01.065191 | orchestrator | changed: 
[testbed-node-0]
2025-06-03 15:17:01.066122 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:01.067041 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:01.068241 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:01.069062 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:01.070111 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:01.070457 | orchestrator |
2025-06-03 15:17:01.071582 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-06-03 15:17:01.072553 | orchestrator | Tuesday 03 June 2025 15:17:01 +0000 (0:00:08.293) 0:05:33.408 **********
2025-06-03 15:17:04.355972 | orchestrator | changed: [testbed-manager]
2025-06-03 15:17:04.356106 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:04.362200 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:04.362359 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:04.362383 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:04.362396 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:04.362418 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:04.362435 | orchestrator |
2025-06-03 15:17:04.362456 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-06-03 15:17:04.362478 | orchestrator | Tuesday 03 June 2025 15:17:04 +0000 (0:00:03.292) 0:05:36.700 **********
2025-06-03 15:17:05.928464 | orchestrator | ok: [testbed-manager]
2025-06-03 15:17:05.929100 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:05.929852 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:05.930950 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:05.932013 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:05.932038 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:05.932043 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:05.933721 | orchestrator |
2025-06-03 15:17:05.934829 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-06-03 15:17:05.934863 | orchestrator | Tuesday 03 June 2025 15:17:05 +0000 (0:00:01.573) 0:05:38.273 **********
2025-06-03 15:17:07.242242 | orchestrator | ok: [testbed-manager]
2025-06-03 15:17:07.242560 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:07.242880 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:07.244602 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:07.245388 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:07.245852 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:07.246530 | orchestrator | changed: [testbed-node-2]
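Note: the repository and version-pinning tasks above can be approximated with standard apt-related modules. This is a rough, hypothetical sketch only; the key URL, repository line, pin file name, pin priority and the choice of apt preferences plus dpkg holds are assumptions for illustration, and the version (5:27.5.1 in this run) would normally come from role defaults rather than being hard-coded.

```yaml
# Hypothetical sketch of the repository / pinning steps above. URLs, file
# names and the pinning mechanism are illustrative assumptions only.
- name: Add repository gpg key
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add repository
  ansible.builtin.apt_repository:
    repo: "deb https://download.docker.com/linux/ubuntu noble stable"
    filename: docker
    state: present

- name: Update package cache
  ansible.builtin.apt:
    update_cache: true

- name: Pin docker package version
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker-ce
    mode: "0644"
    content: |
      Package: docker-ce
      Pin: version 5:27.5.1*
      Pin-Priority: 1000

- name: Lock containerd package
  ansible.builtin.dpkg_selections:
    name: containerd.io    # package name assumed; the log only says "containerd"
    selection: hold
```

Pinning plus a dpkg hold explains the ok/changed pattern in the log: the manager already carries the pin from its own bootstrap, while the freshly provisioned nodes report changed.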
2025-06-03 15:17:07.247259 | orchestrator |
2025-06-03 15:17:07.247897 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-06-03 15:17:07.248354 | orchestrator | Tuesday 03 June 2025 15:17:07 +0000 (0:00:00.628) 0:05:39.590 **********
2025-06-03 15:17:07.453114 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:17:07.521169 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:17:07.586343 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:17:07.664986 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:17:07.873077 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:17:07.873193 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:17:07.874692 | orchestrator | changed: [testbed-manager]
2025-06-03 15:17:07.874708 | orchestrator |
2025-06-03 15:17:07.875441 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-06-03 15:17:07.875949 | orchestrator | Tuesday 03 June 2025 15:17:07 +0000 (0:00:00.628) 0:05:40.218 **********
2025-06-03 15:17:17.324746 | orchestrator | ok: [testbed-manager]
2025-06-03 15:17:17.326562 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:17.327204 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:17.329740 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:17.330100 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:17.331351 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:17.332451 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:17.333002 | orchestrator |
2025-06-03 15:17:17.334076 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-06-03 15:17:17.334742 | orchestrator | Tuesday 03 June 2025 15:17:17 +0000 (0:00:09.452) 0:05:49.671 **********
2025-06-03 15:17:18.247657 | orchestrator | changed: [testbed-manager]
2025-06-03 15:17:18.247762 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:18.248992 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:18.250221 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:18.251019 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:18.251743 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:18.252429 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:18.253256 | orchestrator |
2025-06-03 15:17:18.253987 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-06-03 15:17:18.254753 | orchestrator | Tuesday 03 June 2025 15:17:18 +0000 (0:00:00.924) 0:05:50.595 **********
2025-06-03 15:17:27.101717 | orchestrator | ok: [testbed-manager]
2025-06-03 15:17:27.101800 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:27.102056 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:27.103167 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:27.104426 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:27.105670 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:27.106246 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:27.107514 | orchestrator |
2025-06-03 15:17:27.107889 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-06-03 15:17:27.108759 | orchestrator | Tuesday 03 June 2025 15:17:27 +0000 (0:00:08.854) 0:05:59.450 **********
2025-06-03 15:17:37.760288 | orchestrator | ok: [testbed-manager]
2025-06-03 15:17:37.760404 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:37.760419 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:37.760514 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:37.764435 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:37.767837 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:37.767895 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:37.767905 | orchestrator |
2025-06-03 15:17:37.767914 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-06-03 15:17:37.768963 | orchestrator | Tuesday 03 June 2025 15:17:37 +0000 (0:00:10.653) 0:06:10.103 **********
2025-06-03 15:17:38.151859 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-06-03 15:17:38.241132 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-06-03 15:17:39.105877 | orchestrator | ok: [testbed-node-4]
=> (item=python3-docker) 2025-06-03 15:17:39.106242 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-06-03 15:17:39.106455 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-06-03 15:17:39.107656 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-06-03 15:17:39.108800 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-06-03 15:17:39.109374 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-06-03 15:17:39.110239 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-06-03 15:17:39.111208 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-06-03 15:17:39.111837 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-06-03 15:17:39.112927 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-06-03 15:17:39.113990 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-06-03 15:17:39.114455 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-06-03 15:17:39.115235 | orchestrator | 2025-06-03 15:17:39.116048 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-06-03 15:17:39.116758 | orchestrator | Tuesday 03 June 2025 15:17:39 +0000 (0:00:01.349) 0:06:11.452 ********** 2025-06-03 15:17:39.242778 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:39.305792 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:39.378155 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:39.440934 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:39.505996 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:39.626907 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:39.627066 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:39.627416 | orchestrator | 2025-06-03 15:17:39.627871 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-06-03 15:17:39.628447 | orchestrator | Tuesday 03 June 2025 15:17:39 +0000 (0:00:00.521) 0:06:11.974 ********** 2025-06-03 15:17:43.426465 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:43.428653 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:17:43.428692 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:43.429713 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:17:43.430247 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:43.430799 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:17:43.432158 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:17:43.432224 | orchestrator | 2025-06-03 15:17:43.432993 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-06-03 15:17:43.433926 | orchestrator | Tuesday 03 June 2025 15:17:43 +0000 (0:00:03.794) 0:06:15.768 ********** 2025-06-03 15:17:43.578981 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:43.645502 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:43.717372 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:43.789018 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:43.851784 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:43.960864 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:43.961376 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:43.964434 | orchestrator | 2025-06-03 15:17:43.964982 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python 
bindings from pip)] *** 2025-06-03 15:17:43.966302 | orchestrator | Tuesday 03 June 2025 15:17:43 +0000 (0:00:00.538) 0:06:16.307 ********** 2025-06-03 15:17:44.056103 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-06-03 15:17:44.056980 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-06-03 15:17:44.135856 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:44.135954 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-06-03 15:17:44.136498 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-06-03 15:17:44.205848 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:44.207393 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-06-03 15:17:44.208437 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-06-03 15:17:44.278822 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:44.279941 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-06-03 15:17:44.281214 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-06-03 15:17:44.348597 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:44.349278 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-06-03 15:17:44.351049 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-06-03 15:17:44.416739 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:44.417718 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-06-03 15:17:44.418522 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-06-03 15:17:44.531626 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:44.531811 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-06-03 15:17:44.533468 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-06-03 15:17:44.535025 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:44.537164 | orchestrator | 2025-06-03 15:17:44.538061 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-06-03 15:17:44.539275 | orchestrator | Tuesday 03 June 2025 15:17:44 +0000 (0:00:00.572) 0:06:16.879 ********** 2025-06-03 15:17:44.666873 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:44.739740 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:44.803808 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:44.867859 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:44.940149 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:45.057058 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:45.057766 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:45.058480 | orchestrator | 2025-06-03 15:17:45.058950 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-06-03 15:17:45.059957 | orchestrator | Tuesday 03 June 2025 15:17:45 +0000 (0:00:00.525) 0:06:17.404 ********** 2025-06-03 15:17:45.199892 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:45.261418 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:45.325221 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:45.397605 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:45.461651 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:45.564655 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:45.564761 | 
orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:45.565944 | orchestrator | 2025-06-03 15:17:45.567747 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-06-03 15:17:45.569106 | orchestrator | Tuesday 03 June 2025 15:17:45 +0000 (0:00:00.506) 0:06:17.911 ********** 2025-06-03 15:17:45.710468 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:45.774391 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:46.035200 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:46.106293 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:46.170239 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:46.286633 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:46.286732 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:46.287635 | orchestrator | 2025-06-03 15:17:46.287911 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-06-03 15:17:46.288715 | orchestrator | Tuesday 03 June 2025 15:17:46 +0000 (0:00:00.723) 0:06:18.634 ********** 2025-06-03 15:17:47.975063 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:47.975180 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:17:47.976666 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:17:47.978381 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:17:47.978742 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:17:47.980208 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:17:47.981140 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:17:47.981452 | orchestrator | 2025-06-03 15:17:47.982503 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-06-03 15:17:47.983662 | orchestrator | Tuesday 03 June 2025 15:17:47 +0000 (0:00:01.686) 0:06:20.320 ********** 2025-06-03 15:17:48.869720 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:17:48.870139 | orchestrator | 2025-06-03 15:17:48.871434 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-06-03 15:17:48.872809 | orchestrator | Tuesday 03 June 2025 15:17:48 +0000 (0:00:00.895) 0:06:21.216 ********** 2025-06-03 15:17:49.751996 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:49.752710 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:49.753995 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:17:49.756030 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:17:49.756856 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:49.757425 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:17:49.757959 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:17:49.759109 | orchestrator | 2025-06-03 15:17:49.759893 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-06-03 15:17:49.761386 | orchestrator | Tuesday 03 June 2025 15:17:49 +0000 (0:00:00.880) 0:06:22.097 ********** 2025-06-03 15:17:50.195567 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:50.265200 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:50.863896 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:17:50.864697 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:17:50.866796 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:50.866842 | 
orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:50.866854 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:50.867684 | orchestrator |
2025-06-03 15:17:50.867829 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-06-03 15:17:50.868755 | orchestrator | Tuesday 03 June 2025 15:17:50 +0000 (0:00:01.112) 0:06:23.209 **********
2025-06-03 15:17:52.350357 | orchestrator | ok: [testbed-manager]
2025-06-03 15:17:52.350496 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:52.350524 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:52.350635 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:52.351109 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:52.351665 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:52.353243 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:52.354184 | orchestrator |
2025-06-03 15:17:52.354217 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-06-03 15:17:52.354229 | orchestrator | Tuesday 03 June 2025 15:17:52 +0000 (0:00:01.482) 0:06:24.691 **********
2025-06-03 15:17:52.490337 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:17:53.718422 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:17:53.718589 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:17:53.719082 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:17:53.720107 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:17:53.720932 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:17:53.721510 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:17:53.722142 | orchestrator |
2025-06-03 15:17:53.722934 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-06-03 15:17:53.724069 | orchestrator | Tuesday 03 June 2025 15:17:53 +0000 (0:00:01.371) 0:06:26.063 **********
2025-06-03 15:17:55.076993 | orchestrator | ok: [testbed-manager]
2025-06-03 15:17:55.077134 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:55.077309 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:55.080412 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:55.080500 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:55.081120 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:55.082554 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:55.083919 | orchestrator |
2025-06-03 15:17:55.084351 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-06-03 15:17:55.085377 | orchestrator | Tuesday 03 June 2025 15:17:55 +0000 (0:00:01.361) 0:06:27.424 **********
2025-06-03 15:17:56.685193 | orchestrator | changed: [testbed-manager]
2025-06-03 15:17:56.685420 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:56.689188 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:56.691306 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:56.692329 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:56.693917 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:56.695512 | orchestrator | changed: [testbed-node-2]
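Note: the configuration tasks above drop a systemd overlay, a limits file and /etc/docker/daemon.json onto each node. The daemon.json content used by the role is not shown in the log, so the sketch below is a generic, hypothetical example of such a task with commonly used keys, not the values deployed in this job.

```yaml
# Hypothetical sketch of a daemon.json deployment task; the JSON keys and
# values are common examples, not the configuration used in this run.
- name: Copy daemon.json configuration file
  ansible.builtin.copy:
    dest: /etc/docker/daemon.json
    mode: "0644"
    content: |
      {
        "log-driver": "json-file",
        "log-opts": {
          "max-size": "10m",
          "max-file": "3"
        },
        "live-restore": true
      }
```

In practice such a task would notify a handler that restarts dockerd when the file changes, which is why the service tasks that follow reload systemd and manage the docker, docker socket and containerd units.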
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:17:57.523794 | orchestrator | 2025-06-03 15:17:57.526605 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-06-03 15:17:57.528212 | orchestrator | Tuesday 03 June 2025 15:17:57 +0000 (0:00:00.838) 0:06:29.869 ********** 2025-06-03 15:17:58.881439 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:58.882328 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:17:58.883039 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:17:58.884925 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:17:58.885540 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:17:58.888907 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:17:58.888954 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:17:58.888974 | orchestrator | 2025-06-03 15:17:58.888994 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-06-03 15:17:58.889007 | orchestrator | Tuesday 03 June 2025 15:17:58 +0000 (0:00:01.359) 0:06:31.228 ********** 2025-06-03 15:18:00.070425 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:00.070534 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:00.071694 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:00.072654 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:00.073310 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:00.074144 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:00.074997 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:00.075487 | orchestrator | 2025-06-03 15:18:00.076209 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-06-03 15:18:00.077006 | orchestrator | Tuesday 03 June 2025 15:18:00 +0000 (0:00:01.187) 0:06:32.416 ********** 2025-06-03 15:18:01.483511 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:01.483625 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:01.483643 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:01.483720 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:01.484488 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:01.484729 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:01.485310 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:01.485469 | orchestrator | 2025-06-03 15:18:01.485993 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-06-03 15:18:01.486559 | orchestrator | Tuesday 03 June 2025 15:18:01 +0000 (0:00:01.413) 0:06:33.829 ********** 2025-06-03 15:18:02.625187 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:02.626929 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:02.627769 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:02.628848 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:02.629673 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:02.630579 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:02.631072 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:02.631897 | orchestrator | 2025-06-03 15:18:02.632700 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-03 15:18:02.633602 | orchestrator | Tuesday 03 June 2025 15:18:02 +0000 (0:00:01.141) 0:06:34.970 ********** 2025-06-03 15:18:03.840985 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:18:03.841357 | orchestrator | 2025-06-03 15:18:03.842404 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-03 15:18:03.845696 | orchestrator | Tuesday 03 June 2025 15:18:03 +0000 (0:00:00.929) 0:06:35.900 ********** 2025-06-03 15:18:03.846487 | orchestrator | 2025-06-03 15:18:03.847476 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-03 15:18:03.848468 | orchestrator | Tuesday 03 June 2025 15:18:03 +0000 (0:00:00.039) 0:06:35.939 ********** 2025-06-03 15:18:03.849328 | orchestrator | 2025-06-03 15:18:03.850606 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-03 15:18:03.854405 | orchestrator | Tuesday 03 June 2025 15:18:03 +0000 (0:00:00.046) 0:06:35.986 ********** 2025-06-03 15:18:03.855180 | orchestrator | 2025-06-03 15:18:03.855713 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-03 15:18:03.856395 | orchestrator | Tuesday 03 June 2025 15:18:03 +0000 (0:00:00.039) 0:06:36.025 ********** 2025-06-03 15:18:03.857084 | orchestrator | 2025-06-03 15:18:03.857709 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-03 15:18:03.858418 | orchestrator | Tuesday 03 June 2025 15:18:03 +0000 (0:00:00.038) 0:06:36.064 ********** 2025-06-03 15:18:03.858920 | orchestrator | 2025-06-03 15:18:03.859654 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-03 15:18:03.860066 | orchestrator | Tuesday 03 June 2025 15:18:03 +0000 (0:00:00.045) 0:06:36.110 ********** 2025-06-03 15:18:03.860719 | orchestrator | 2025-06-03 15:18:03.861169 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-03 15:18:03.865855 | orchestrator | Tuesday 03 June 2025 15:18:03 +0000 (0:00:00.038) 0:06:36.149 ********** 2025-06-03 15:18:03.866782 | orchestrator | 2025-06-03 15:18:03.867520 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-03 15:18:03.868356 | orchestrator | Tuesday 03 June 2025 15:18:03 +0000 (0:00:00.038) 0:06:36.187 ********** 2025-06-03 15:18:05.175661 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:05.175832 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:05.176105 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:05.176543 | orchestrator | 2025-06-03 15:18:05.177326 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-06-03 15:18:05.177900 | orchestrator | Tuesday 03 June 2025 15:18:05 +0000 (0:00:01.334) 0:06:37.522 ********** 2025-06-03 15:18:06.478750 | orchestrator | changed: [testbed-manager] 2025-06-03 15:18:06.478866 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:18:06.479344 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:18:06.480643 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:18:06.481390 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:18:06.482485 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:18:06.483418 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:18:06.484295 | orchestrator | 2025-06-03 15:18:06.485041 | 
orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-03 15:18:06.485847 | orchestrator | Tuesday 03 June 2025 15:18:06 +0000 (0:00:01.304) 0:06:38.826 ********** 2025-06-03 15:18:07.642107 | orchestrator | changed: [testbed-manager] 2025-06-03 15:18:07.642218 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:18:07.642718 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:18:07.647038 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:18:07.648388 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:18:07.648627 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:18:07.650489 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:18:07.651873 | orchestrator | 2025-06-03 15:18:07.653561 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-03 15:18:07.654336 | orchestrator | Tuesday 03 June 2025 15:18:07 +0000 (0:00:01.160) 0:06:39.987 ********** 2025-06-03 15:18:07.782220 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:18:10.240673 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:18:10.241779 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:18:10.242458 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:18:10.244278 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:18:10.245141 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:18:10.245907 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:18:10.246976 | orchestrator | 2025-06-03 15:18:10.247834 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-03 15:18:10.248405 | orchestrator | Tuesday 03 June 2025 15:18:10 +0000 (0:00:02.598) 0:06:42.586 ********** 2025-06-03 15:18:10.352701 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:18:10.352775 | orchestrator | 2025-06-03 15:18:10.354496 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-03 15:18:10.356710 | orchestrator | Tuesday 03 June 2025 15:18:10 +0000 (0:00:00.112) 0:06:42.699 ********** 2025-06-03 15:18:11.349807 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:11.350190 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:18:11.351986 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:18:11.352622 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:18:11.353006 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:18:11.354375 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:18:11.355956 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:18:11.356890 | orchestrator | 2025-06-03 15:18:11.358003 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-03 15:18:11.358231 | orchestrator | Tuesday 03 June 2025 15:18:11 +0000 (0:00:00.996) 0:06:43.695 ********** 2025-06-03 15:18:11.688660 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:18:11.753756 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:18:11.830510 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:18:11.900218 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:18:11.970214 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:18:12.103602 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:18:12.104716 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:18:12.105727 | orchestrator | 2025-06-03 15:18:12.108989 | orchestrator | TASK [osism.services.docker : Include facts tasks] 
***************************** 2025-06-03 15:18:12.109036 | orchestrator | Tuesday 03 June 2025 15:18:12 +0000 (0:00:00.756) 0:06:44.452 ********** 2025-06-03 15:18:13.001596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:18:13.002398 | orchestrator | 2025-06-03 15:18:13.003429 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-03 15:18:13.004503 | orchestrator | Tuesday 03 June 2025 15:18:12 +0000 (0:00:00.894) 0:06:45.347 ********** 2025-06-03 15:18:13.862339 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:13.863374 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:13.866174 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:13.866405 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:13.867288 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:13.868789 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:13.870075 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:13.871050 | orchestrator | 2025-06-03 15:18:13.871801 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-03 15:18:13.872062 | orchestrator | Tuesday 03 June 2025 15:18:13 +0000 (0:00:00.861) 0:06:46.209 ********** 2025-06-03 15:18:16.481787 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-03 15:18:16.483369 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-03 15:18:16.487040 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-03 15:18:16.488221 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-03 15:18:16.489651 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-03 15:18:16.490209 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-03 15:18:16.491341 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-03 15:18:16.492174 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-06-03 15:18:16.492684 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-03 15:18:16.493361 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-03 15:18:16.494099 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-03 15:18:16.494701 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-03 15:18:16.495741 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-03 15:18:16.496179 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-03 15:18:16.496964 | orchestrator | 2025-06-03 15:18:16.497826 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-03 15:18:16.498441 | orchestrator | Tuesday 03 June 2025 15:18:16 +0000 (0:00:02.618) 0:06:48.827 ********** 2025-06-03 15:18:16.607117 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:18:16.664069 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:18:16.726890 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:18:16.788589 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:18:16.860150 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:18:16.969599 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:18:16.969785 | orchestrator | skipping: 
[testbed-node-2] 2025-06-03 15:18:16.970790 | orchestrator | 2025-06-03 15:18:16.971275 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-06-03 15:18:16.972459 | orchestrator | Tuesday 03 June 2025 15:18:16 +0000 (0:00:00.489) 0:06:49.317 ********** 2025-06-03 15:18:17.758412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:18:17.759701 | orchestrator | 2025-06-03 15:18:17.760072 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-03 15:18:17.761851 | orchestrator | Tuesday 03 June 2025 15:18:17 +0000 (0:00:00.788) 0:06:50.106 ********** 2025-06-03 15:18:18.258380 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:18.318392 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:18.746220 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:18.746504 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:18.747822 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:18.751917 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:18.752011 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:18.753797 | orchestrator | 2025-06-03 15:18:18.754421 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-03 15:18:18.755333 | orchestrator | Tuesday 03 June 2025 15:18:18 +0000 (0:00:00.986) 0:06:51.092 ********** 2025-06-03 15:18:19.132285 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:19.629092 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:19.629282 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:19.629999 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:19.630070 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:19.630213 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:19.630441 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:19.630650 | orchestrator | 2025-06-03 15:18:19.632386 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-03 15:18:19.632542 | orchestrator | Tuesday 03 June 2025 15:18:19 +0000 (0:00:00.881) 0:06:51.974 ********** 2025-06-03 15:18:19.771831 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:18:19.838613 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:18:19.903919 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:18:19.977288 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:18:20.047790 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:18:20.150788 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:18:20.151348 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:18:20.152630 | orchestrator | 2025-06-03 15:18:20.153457 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-03 15:18:20.154186 | orchestrator | Tuesday 03 June 2025 15:18:20 +0000 (0:00:00.523) 0:06:52.497 ********** 2025-06-03 15:18:21.546121 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:21.546598 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:21.547852 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:21.549367 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:21.549869 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:21.550698 | orchestrator | ok: [testbed-node-1] 
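[Editor's note] The docker_compose tasks above (get checksum of the docker-compose file, remove the binary, uninstall the package, then install docker-compose-plugin) follow a common "retire Compose v1, switch to the Compose v2 plugin" pattern. A minimal, hypothetical Ansible sketch of that pattern is shown below; the paths, package names, and variable names are assumptions for illustration, not the role's actual source.

# Hypothetical sketch of the v1-cleanup / v2-plugin pattern suggested by the task names above.
- name: Get checksum of docker-compose file
  ansible.builtin.stat:
    path: /usr/local/bin/docker-compose   # assumed location of a legacy v1 binary
    checksum_algorithm: sha256
  register: docker_compose_binary

- name: Remove docker-compose binary
  ansible.builtin.file:
    path: /usr/local/bin/docker-compose
    state: absent
  when: docker_compose_binary.stat.exists   # skipped when no v1 binary is present, as in the log

- name: Uninstall docker-compose package
  ansible.builtin.package:
    name: docker-compose
    state: absent

- name: Install docker-compose-plugin package
  ansible.builtin.package:
    name: docker-compose-plugin
    state: present

After this, Compose is invoked as "docker compose" (a docker CLI plugin) rather than the standalone "docker-compose" script.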
2025-06-03 15:18:21.551783 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:21.551805 | orchestrator | 2025-06-03 15:18:21.552664 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-03 15:18:21.553188 | orchestrator | Tuesday 03 June 2025 15:18:21 +0000 (0:00:01.396) 0:06:53.894 ********** 2025-06-03 15:18:21.678738 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:18:21.753580 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:18:21.819835 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:18:21.886554 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:18:21.968794 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:18:22.071335 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:18:22.071428 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:18:22.071884 | orchestrator | 2025-06-03 15:18:22.072259 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-03 15:18:22.073067 | orchestrator | Tuesday 03 June 2025 15:18:22 +0000 (0:00:00.524) 0:06:54.418 ********** 2025-06-03 15:18:29.580743 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:29.581674 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:18:29.582334 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:18:29.582966 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:18:29.583781 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:18:29.585519 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:18:29.586778 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:18:29.587410 | orchestrator | 2025-06-03 15:18:29.587645 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-03 15:18:29.588618 | orchestrator | Tuesday 03 June 2025 15:18:29 +0000 (0:00:07.508) 0:07:01.927 ********** 2025-06-03 15:18:30.985725 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:30.986614 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:18:30.987615 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:18:30.987894 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:18:30.988816 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:18:30.989438 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:18:30.990133 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:18:30.991383 | orchestrator | 2025-06-03 15:18:30.991423 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-03 15:18:30.991867 | orchestrator | Tuesday 03 June 2025 15:18:30 +0000 (0:00:01.405) 0:07:03.332 ********** 2025-06-03 15:18:32.731831 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:32.732786 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:18:32.733866 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:18:32.735893 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:18:32.736569 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:18:32.737384 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:18:32.737903 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:18:32.738838 | orchestrator | 2025-06-03 15:18:32.739411 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-06-03 15:18:32.740338 | orchestrator | Tuesday 03 June 2025 15:18:32 +0000 (0:00:01.744) 0:07:05.077 ********** 2025-06-03 15:18:34.552291 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:34.552473 | 
orchestrator | changed: [testbed-node-3] 2025-06-03 15:18:34.554187 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:18:34.555036 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:18:34.556503 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:18:34.557460 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:18:34.558288 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:18:34.559357 | orchestrator | 2025-06-03 15:18:34.560011 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-03 15:18:34.561499 | orchestrator | Tuesday 03 June 2025 15:18:34 +0000 (0:00:01.821) 0:07:06.898 ********** 2025-06-03 15:18:35.007387 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:35.435220 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:35.435526 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:35.436689 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:35.437667 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:35.438916 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:35.439842 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:35.440293 | orchestrator | 2025-06-03 15:18:35.441539 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-03 15:18:35.443327 | orchestrator | Tuesday 03 June 2025 15:18:35 +0000 (0:00:00.884) 0:07:07.782 ********** 2025-06-03 15:18:35.596960 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:18:35.667505 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:18:35.745451 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:18:35.839693 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:18:35.907436 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:18:36.352022 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:18:36.353681 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:18:36.356845 | orchestrator | 2025-06-03 15:18:36.356884 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-06-03 15:18:36.356907 | orchestrator | Tuesday 03 June 2025 15:18:36 +0000 (0:00:00.916) 0:07:08.699 ********** 2025-06-03 15:18:36.490463 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:18:36.567329 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:18:36.634581 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:18:36.700670 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:18:36.776436 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:18:36.928932 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:18:36.929858 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:18:36.933859 | orchestrator | 2025-06-03 15:18:36.933881 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-06-03 15:18:36.933891 | orchestrator | Tuesday 03 June 2025 15:18:36 +0000 (0:00:00.575) 0:07:09.275 ********** 2025-06-03 15:18:37.107727 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:37.176413 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:37.273565 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:37.366916 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:37.627479 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:37.741171 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:37.742833 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:37.745833 | orchestrator | 2025-06-03 15:18:37.745857 | orchestrator | TASK 
[osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-06-03 15:18:37.746262 | orchestrator | Tuesday 03 June 2025 15:18:37 +0000 (0:00:00.812) 0:07:10.088 ********** 2025-06-03 15:18:37.888927 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:37.954968 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:38.034469 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:38.122290 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:38.227834 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:38.349095 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:38.349312 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:38.349865 | orchestrator | 2025-06-03 15:18:38.350327 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-06-03 15:18:38.350871 | orchestrator | Tuesday 03 June 2025 15:18:38 +0000 (0:00:00.609) 0:07:10.697 ********** 2025-06-03 15:18:38.514166 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:38.580416 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:38.658463 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:38.786919 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:38.902715 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:38.905619 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:38.905657 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:38.905669 | orchestrator | 2025-06-03 15:18:38.906895 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-06-03 15:18:38.908036 | orchestrator | Tuesday 03 June 2025 15:18:38 +0000 (0:00:00.551) 0:07:11.249 ********** 2025-06-03 15:18:44.698559 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:44.700104 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:44.701598 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:44.702305 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:44.704359 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:44.706117 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:44.706534 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:44.707945 | orchestrator | 2025-06-03 15:18:44.708948 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-06-03 15:18:44.709908 | orchestrator | Tuesday 03 June 2025 15:18:44 +0000 (0:00:05.796) 0:07:17.045 ********** 2025-06-03 15:18:44.851680 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:18:44.929403 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:18:45.005987 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:18:45.084474 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:18:45.156827 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:18:45.278685 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:18:45.279396 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:18:45.282151 | orchestrator | 2025-06-03 15:18:45.283316 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-06-03 15:18:45.284828 | orchestrator | Tuesday 03 June 2025 15:18:45 +0000 (0:00:00.579) 0:07:17.625 ********** 2025-06-03 15:18:46.413573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:18:46.413892 | orchestrator | 2025-06-03 
15:18:46.415481 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-03 15:18:46.416473 | orchestrator | Tuesday 03 June 2025 15:18:46 +0000 (0:00:01.135) 0:07:18.761 ********** 2025-06-03 15:18:48.213857 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:48.213947 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:48.213954 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:48.214000 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:48.214457 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:48.215536 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:48.215657 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:48.216455 | orchestrator | 2025-06-03 15:18:48.218834 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-03 15:18:48.219300 | orchestrator | Tuesday 03 June 2025 15:18:48 +0000 (0:00:01.797) 0:07:20.559 ********** 2025-06-03 15:18:49.418798 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:49.419608 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:49.421786 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:49.422132 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:49.423053 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:49.424128 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:49.424675 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:49.425412 | orchestrator | 2025-06-03 15:18:49.426272 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-03 15:18:49.427516 | orchestrator | Tuesday 03 June 2025 15:18:49 +0000 (0:00:01.207) 0:07:21.766 ********** 2025-06-03 15:18:50.056112 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:50.475304 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:18:50.475558 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:18:50.477442 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:18:50.478100 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:18:50.479000 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:18:50.479396 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:18:50.480311 | orchestrator | 2025-06-03 15:18:50.480718 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-03 15:18:50.481387 | orchestrator | Tuesday 03 June 2025 15:18:50 +0000 (0:00:01.054) 0:07:22.821 ********** 2025-06-03 15:18:52.216651 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-03 15:18:52.216760 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-03 15:18:52.217682 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-03 15:18:52.218134 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-03 15:18:52.218569 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-03 15:18:52.219444 | orchestrator | changed: [testbed-node-1] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-03 15:18:52.220024 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-03 15:18:52.220246 | orchestrator | 2025-06-03 15:18:52.220655 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-06-03 15:18:52.221368 | orchestrator | Tuesday 03 June 2025 15:18:52 +0000 (0:00:01.742) 0:07:24.563 ********** 2025-06-03 15:18:53.055906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:18:53.056337 | orchestrator | 2025-06-03 15:18:53.057025 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-06-03 15:18:53.058099 | orchestrator | Tuesday 03 June 2025 15:18:53 +0000 (0:00:00.838) 0:07:25.402 ********** 2025-06-03 15:19:01.844463 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:01.845586 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:01.846678 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:01.847684 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:01.850330 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:01.851898 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:01.852858 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:01.853988 | orchestrator | 2025-06-03 15:19:01.854881 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-03 15:19:01.855680 | orchestrator | Tuesday 03 June 2025 15:19:01 +0000 (0:00:08.788) 0:07:34.190 ********** 2025-06-03 15:19:03.849572 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:03.850754 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:03.851698 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:03.853564 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:03.854429 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:03.855326 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:03.856595 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:03.857237 | orchestrator | 2025-06-03 15:19:03.857819 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-03 15:19:03.858992 | orchestrator | Tuesday 03 June 2025 15:19:03 +0000 (0:00:02.006) 0:07:36.197 ********** 2025-06-03 15:19:05.161785 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:05.162390 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:05.163134 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:05.165107 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:05.165385 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:05.166267 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:05.167213 | orchestrator | 2025-06-03 15:19:05.167348 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-03 15:19:05.167958 | orchestrator | Tuesday 03 June 2025 15:19:05 +0000 (0:00:01.310) 0:07:37.508 ********** 2025-06-03 15:19:06.754984 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:06.755258 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:06.755967 | orchestrator | changed: [testbed-node-4] 
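[Editor's note] The chrony tasks above use the standard template-plus-handler pattern: the rendered chrony.conf notifies a restart handler, so the service is only restarted on hosts where the file content actually changed. A minimal sketch of that pattern follows, assuming typical Debian-family paths and service names; it is illustrative, not the osism.services.chrony source.

# tasks (sketch)
- name: Copy configuration file
  ansible.builtin.template:
    src: chrony.conf.j2
    dest: /etc/chrony/chrony.conf
    owner: root
    group: root
    mode: "0644"
  notify: Restart chrony service

# handlers (sketch)
- name: Restart chrony service
  ansible.builtin.service:
    name: chrony
    state: restarted

Because handlers run only when notified by a changed task, the "RUNNING HANDLER" entries near the end of the play fire solely on hosts whose configuration was actually modified.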
2025-06-03 15:19:06.757743 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:06.759323 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:06.760644 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:06.762668 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:06.762708 | orchestrator | 2025-06-03 15:19:06.764025 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-03 15:19:06.765237 | orchestrator | 2025-06-03 15:19:06.766425 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-03 15:19:06.766812 | orchestrator | Tuesday 03 June 2025 15:19:06 +0000 (0:00:01.595) 0:07:39.103 ********** 2025-06-03 15:19:06.893176 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:19:06.961720 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:19:07.035482 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:19:07.123415 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:19:07.202618 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:19:07.360265 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:19:07.360954 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:19:07.361760 | orchestrator | 2025-06-03 15:19:07.362923 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-03 15:19:07.363023 | orchestrator | 2025-06-03 15:19:07.364816 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-03 15:19:07.368512 | orchestrator | Tuesday 03 June 2025 15:19:07 +0000 (0:00:00.601) 0:07:39.705 ********** 2025-06-03 15:19:08.773694 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:08.774317 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:08.774395 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:08.776036 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:08.776062 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:08.776952 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:08.777611 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:08.777909 | orchestrator | 2025-06-03 15:19:08.778434 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-03 15:19:08.779079 | orchestrator | Tuesday 03 June 2025 15:19:08 +0000 (0:00:01.414) 0:07:41.120 ********** 2025-06-03 15:19:10.470901 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:10.471781 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:10.472408 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:10.475805 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:10.475896 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:10.475912 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:10.476365 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:10.476641 | orchestrator | 2025-06-03 15:19:10.477024 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-03 15:19:10.477653 | orchestrator | Tuesday 03 June 2025 15:19:10 +0000 (0:00:01.697) 0:07:42.817 ********** 2025-06-03 15:19:10.596648 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:19:10.667589 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:19:10.773895 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:19:10.848174 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:19:10.937465 | orchestrator | skipping: 
[testbed-node-0] 2025-06-03 15:19:11.367990 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:19:11.368768 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:19:11.369649 | orchestrator | 2025-06-03 15:19:11.373003 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-03 15:19:11.374066 | orchestrator | Tuesday 03 June 2025 15:19:11 +0000 (0:00:00.896) 0:07:43.714 ********** 2025-06-03 15:19:12.665616 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:12.666166 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:12.667008 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:12.668555 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:12.668579 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:12.668783 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:12.669905 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:12.671107 | orchestrator | 2025-06-03 15:19:12.671710 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-03 15:19:12.672231 | orchestrator | 2025-06-03 15:19:12.672585 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-03 15:19:12.672937 | orchestrator | Tuesday 03 June 2025 15:19:12 +0000 (0:00:01.296) 0:07:45.011 ********** 2025-06-03 15:19:13.713099 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:19:13.713896 | orchestrator | 2025-06-03 15:19:13.715694 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-03 15:19:13.716149 | orchestrator | Tuesday 03 June 2025 15:19:13 +0000 (0:00:01.047) 0:07:46.058 ********** 2025-06-03 15:19:14.129768 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:14.604397 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:14.604807 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:14.606099 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:14.606992 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:14.608105 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:14.608615 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:14.609972 | orchestrator | 2025-06-03 15:19:14.610896 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-03 15:19:14.611436 | orchestrator | Tuesday 03 June 2025 15:19:14 +0000 (0:00:00.894) 0:07:46.953 ********** 2025-06-03 15:19:15.779601 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:15.780150 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:15.783986 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:15.784023 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:15.784035 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:15.784046 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:15.784057 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:15.785296 | orchestrator | 2025-06-03 15:19:15.787886 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-03 15:19:15.788854 | orchestrator | Tuesday 03 June 2025 15:19:15 +0000 (0:00:01.172) 0:07:48.125 ********** 2025-06-03 15:19:16.818565 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2025-06-03 15:19:16.819152 | orchestrator | 2025-06-03 15:19:16.821342 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-03 15:19:16.822106 | orchestrator | Tuesday 03 June 2025 15:19:16 +0000 (0:00:01.039) 0:07:49.165 ********** 2025-06-03 15:19:17.231141 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:17.653817 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:17.654147 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:17.655160 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:17.656352 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:17.657545 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:17.658284 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:17.660019 | orchestrator | 2025-06-03 15:19:17.660954 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-03 15:19:17.661676 | orchestrator | Tuesday 03 June 2025 15:19:17 +0000 (0:00:00.835) 0:07:50.000 ********** 2025-06-03 15:19:18.065037 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:18.739927 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:18.740191 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:18.740320 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:18.740810 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:18.741073 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:18.741839 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:18.742235 | orchestrator | 2025-06-03 15:19:18.743273 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:19:18.744107 | orchestrator | 2025-06-03 15:19:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:19:18.744784 | orchestrator | 2025-06-03 15:19:18 | INFO  | Please wait and do not abort execution. 
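[Editor's note] The osism.commons.state tasks above persist the bootstrap state as custom local facts on each host. A hypothetical sketch of how such a state fact can be written and later read back is given below; the facts directory, file name, and key names are assumptions for illustration and may differ from the role's actual layout.

- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Write state into file
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/osism.fact   # assumed file name
    content: "{{ {'bootstrap': {'status': 'bootstrapped'}} | to_json }}"
    mode: "0644"

# After facts are gathered again, the value is exposed on each host as
# ansible_local.osism.bootstrap.status and can gate later plays.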
2025-06-03 15:19:18.746077 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-06-03 15:19:18.747173 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-03 15:19:18.748130 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-03 15:19:18.749328 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-03 15:19:18.750116 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-06-03 15:19:18.751575 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-03 15:19:18.752060 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-03 15:19:18.752725 | orchestrator | 2025-06-03 15:19:18.753270 | orchestrator | 2025-06-03 15:19:18.754594 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:19:18.755920 | orchestrator | Tuesday 03 June 2025 15:19:18 +0000 (0:00:01.087) 0:07:51.088 ********** 2025-06-03 15:19:18.758316 | orchestrator | =============================================================================== 2025-06-03 15:19:18.759865 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.99s 2025-06-03 15:19:18.761515 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.69s 2025-06-03 15:19:18.762873 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.55s 2025-06-03 15:19:18.766359 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.75s 2025-06-03 15:19:18.767607 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.15s 2025-06-03 15:19:18.769107 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.25s 2025-06-03 15:19:18.769751 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.65s 2025-06-03 15:19:18.770597 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.45s 2025-06-03 15:19:18.771511 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.85s 2025-06-03 15:19:18.772244 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.79s 2025-06-03 15:19:18.773311 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.46s 2025-06-03 15:19:18.773937 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.29s 2025-06-03 15:19:18.775094 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.29s 2025-06-03 15:19:18.775996 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.57s 2025-06-03 15:19:18.776632 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.51s 2025-06-03 15:19:18.777788 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.40s 2025-06-03 15:19:18.778726 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.92s 2025-06-03 15:19:18.779431 | 
orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.80s 2025-06-03 15:19:18.780298 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.78s 2025-06-03 15:19:18.780849 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.76s 2025-06-03 15:19:19.552573 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-03 15:19:19.552672 | orchestrator | + osism apply network 2025-06-03 15:19:21.983691 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:19:21.983794 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:19:21.983808 | orchestrator | Registering Redlock._release_script 2025-06-03 15:19:22.050737 | orchestrator | 2025-06-03 15:19:22 | INFO  | Task 50c7b3b5-6d2b-4244-a511-ffab11872cc9 (network) was prepared for execution. 2025-06-03 15:19:22.050880 | orchestrator | 2025-06-03 15:19:22 | INFO  | It takes a moment until task 50c7b3b5-6d2b-4244-a511-ffab11872cc9 (network) has been started and output is visible here. 2025-06-03 15:19:26.400895 | orchestrator | 2025-06-03 15:19:26.401011 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-06-03 15:19:26.401029 | orchestrator | 2025-06-03 15:19:26.401042 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-06-03 15:19:26.401165 | orchestrator | Tuesday 03 June 2025 15:19:26 +0000 (0:00:00.275) 0:00:00.275 ********** 2025-06-03 15:19:26.548330 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:26.627792 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:26.702929 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:26.779997 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:26.972031 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:27.097443 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:27.098480 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:27.102133 | orchestrator | 2025-06-03 15:19:27.102181 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-06-03 15:19:27.102274 | orchestrator | Tuesday 03 June 2025 15:19:27 +0000 (0:00:00.700) 0:00:00.975 ********** 2025-06-03 15:19:28.266717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:19:28.270317 | orchestrator | 2025-06-03 15:19:28.270364 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-06-03 15:19:28.270379 | orchestrator | Tuesday 03 June 2025 15:19:28 +0000 (0:00:01.168) 0:00:02.143 ********** 2025-06-03 15:19:30.201612 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:30.202146 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:30.203248 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:30.204897 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:30.204922 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:30.205370 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:30.206535 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:30.208115 | orchestrator | 2025-06-03 15:19:30.209161 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-06-03 15:19:30.210202 | orchestrator | Tuesday 03 June 2025 15:19:30 +0000 
(0:00:01.937) 0:00:04.081 ********** 2025-06-03 15:19:31.971187 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:31.971346 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:31.975190 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:31.975256 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:31.975269 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:31.975281 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:31.975292 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:31.975303 | orchestrator | 2025-06-03 15:19:31.975703 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-06-03 15:19:31.978347 | orchestrator | Tuesday 03 June 2025 15:19:31 +0000 (0:00:01.765) 0:00:05.846 ********** 2025-06-03 15:19:32.506129 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-06-03 15:19:32.506974 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-06-03 15:19:32.966987 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-06-03 15:19:32.967315 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-06-03 15:19:32.967827 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-06-03 15:19:32.968259 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-06-03 15:19:32.969506 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-06-03 15:19:32.969905 | orchestrator | 2025-06-03 15:19:32.971038 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-06-03 15:19:32.971696 | orchestrator | Tuesday 03 June 2025 15:19:32 +0000 (0:00:01.001) 0:00:06.848 ********** 2025-06-03 15:19:36.382791 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-03 15:19:36.383095 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-03 15:19:36.383526 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:19:36.384707 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-03 15:19:36.385719 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-03 15:19:36.387398 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:19:36.387789 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-03 15:19:36.388481 | orchestrator | 2025-06-03 15:19:36.388782 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-06-03 15:19:36.389401 | orchestrator | Tuesday 03 June 2025 15:19:36 +0000 (0:00:03.408) 0:00:10.257 ********** 2025-06-03 15:19:37.867906 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:37.872344 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:37.872438 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:37.872805 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:37.873957 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:37.875064 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:37.875847 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:37.876959 | orchestrator | 2025-06-03 15:19:37.877650 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-06-03 15:19:37.878541 | orchestrator | Tuesday 03 June 2025 15:19:37 +0000 (0:00:01.489) 0:00:11.747 ********** 2025-06-03 15:19:40.049501 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:19:40.052947 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-03 15:19:40.054324 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-03 15:19:40.055005 
| orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:19:40.056016 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-03 15:19:40.058850 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-03 15:19:40.059410 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-03 15:19:40.060729 | orchestrator | 2025-06-03 15:19:40.061051 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-06-03 15:19:40.062578 | orchestrator | Tuesday 03 June 2025 15:19:40 +0000 (0:00:02.180) 0:00:13.928 ********** 2025-06-03 15:19:40.478631 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:40.569384 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:41.188276 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:41.188512 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:41.189504 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:41.190354 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:41.191107 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:41.191831 | orchestrator | 2025-06-03 15:19:41.192124 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-06-03 15:19:41.192868 | orchestrator | Tuesday 03 June 2025 15:19:41 +0000 (0:00:01.137) 0:00:15.065 ********** 2025-06-03 15:19:41.370794 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:19:41.455872 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:19:41.549347 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:19:41.663571 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:19:41.762169 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:19:41.915327 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:19:41.915555 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:19:41.916695 | orchestrator | 2025-06-03 15:19:41.917693 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-06-03 15:19:41.918124 | orchestrator | Tuesday 03 June 2025 15:19:41 +0000 (0:00:00.726) 0:00:15.792 ********** 2025-06-03 15:19:44.122325 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:44.123386 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:44.126399 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:44.126456 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:44.126468 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:44.127631 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:44.127763 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:44.128134 | orchestrator | 2025-06-03 15:19:44.128898 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-06-03 15:19:44.129303 | orchestrator | Tuesday 03 June 2025 15:19:44 +0000 (0:00:02.205) 0:00:17.998 ********** 2025-06-03 15:19:44.389491 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:19:44.478162 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:19:44.562326 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:19:44.648455 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:19:45.071318 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:19:45.071573 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:19:45.072429 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-06-03 15:19:45.072478 | orchestrator | 2025-06-03 15:19:45.072501 | orchestrator | TASK [osism.commons.network : Manage 
service networkd-dispatcher] ************** 2025-06-03 15:19:45.072536 | orchestrator | Tuesday 03 June 2025 15:19:45 +0000 (0:00:00.949) 0:00:18.948 ********** 2025-06-03 15:19:47.398766 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:47.399811 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:47.402930 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:47.402965 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:47.402976 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:47.405527 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:47.406543 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:47.406844 | orchestrator | 2025-06-03 15:19:47.407794 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-06-03 15:19:47.409322 | orchestrator | Tuesday 03 June 2025 15:19:47 +0000 (0:00:02.325) 0:00:21.273 ********** 2025-06-03 15:19:48.702620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:19:48.702824 | orchestrator | 2025-06-03 15:19:48.705493 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-03 15:19:48.706325 | orchestrator | Tuesday 03 June 2025 15:19:48 +0000 (0:00:01.305) 0:00:22.579 ********** 2025-06-03 15:19:49.716733 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:49.716847 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:49.720382 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:49.724091 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:49.726367 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:49.726883 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:49.728499 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:49.729919 | orchestrator | 2025-06-03 15:19:49.730229 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-03 15:19:49.731157 | orchestrator | Tuesday 03 June 2025 15:19:49 +0000 (0:00:01.015) 0:00:23.595 ********** 2025-06-03 15:19:50.113174 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:50.208357 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:50.320190 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:50.403627 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:50.495190 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:50.624895 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:50.626597 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:50.627312 | orchestrator | 2025-06-03 15:19:50.628087 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-03 15:19:50.628903 | orchestrator | Tuesday 03 June 2025 15:19:50 +0000 (0:00:00.908) 0:00:24.503 ********** 2025-06-03 15:19:51.072521 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:19:51.072686 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:19:51.181467 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:19:51.181630 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:19:51.181906 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 
15:19:51.822958 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:19:51.823420 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:19:51.827367 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:19:51.828089 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:19:51.829936 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:19:51.830962 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:19:51.832189 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:19:51.833551 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:19:51.834514 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:19:51.834806 | orchestrator | 2025-06-03 15:19:51.835554 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-03 15:19:51.836770 | orchestrator | Tuesday 03 June 2025 15:19:51 +0000 (0:00:01.195) 0:00:25.699 ********** 2025-06-03 15:19:52.011512 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:19:52.112194 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:19:52.197248 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:19:52.286894 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:19:52.364441 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:19:52.504157 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:19:52.504772 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:19:52.507980 | orchestrator | 2025-06-03 15:19:52.508011 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-03 15:19:52.508024 | orchestrator | Tuesday 03 June 2025 15:19:52 +0000 (0:00:00.684) 0:00:26.383 ********** 2025-06-03 15:19:56.129943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2025-06-03 15:19:56.133872 | orchestrator | 2025-06-03 15:19:56.135145 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-03 15:19:56.136326 | orchestrator | Tuesday 03 June 2025 15:19:56 +0000 (0:00:03.619) 0:00:30.003 ********** 2025-06-03 15:20:01.292716 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:01.294071 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:01.299092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:01.300102 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:01.300858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:01.302345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:01.302890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:01.303885 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:01.305406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:01.306414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:01.307300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:01.307833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:01.310291 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:01.314209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:01.314318 | orchestrator | 2025-06-03 15:20:01.315313 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-03 15:20:01.316073 | orchestrator | Tuesday 03 June 2025 15:20:01 +0000 (0:00:05.165) 
0:00:35.169 ********** 2025-06-03 15:20:05.985079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:05.986218 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:05.990670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:05.991539 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:05.991639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:05.992501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:05.993239 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:05.993930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:05.994557 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:05.995102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:05.995794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:05.996299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:05.997342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:06.000361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:06.001474 | orchestrator | 2025-06-03 15:20:06.002232 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-03 15:20:06.003287 | orchestrator | Tuesday 03 June 2025 15:20:05 +0000 (0:00:04.695) 0:00:39.864 ********** 2025-06-03 15:20:07.309058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:20:07.309294 | orchestrator | 2025-06-03 15:20:07.309807 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-03 15:20:07.313074 | orchestrator | Tuesday 03 June 2025 15:20:07 +0000 (0:00:01.320) 0:00:41.185 ********** 2025-06-03 15:20:07.821698 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:08.108971 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:20:08.553708 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:20:08.553823 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:20:08.553918 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:20:08.554496 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:20:08.554918 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:20:08.554951 | orchestrator | 2025-06-03 15:20:08.554965 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-03 15:20:08.555357 | orchestrator | Tuesday 03 June 2025 15:20:08 +0000 (0:00:01.251) 0:00:42.436 ********** 2025-06-03 15:20:08.642455 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:08.642517 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:08.642908 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:08.643487 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:08.745427 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:20:08.745750 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:08.746568 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:08.747600 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:08.748165 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:08.849419 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:20:08.850134 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 
15:20:08.851329 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:08.852367 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:08.853358 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:08.945370 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:20:08.946139 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:08.946746 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:08.947262 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:08.947998 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:09.042425 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:20:09.043826 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:09.044581 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:09.046095 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:09.047732 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:09.137533 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:20:09.137635 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:09.137734 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:09.138080 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:09.138378 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:10.570278 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:20:10.570887 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:10.571663 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:10.572577 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:10.573131 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:10.573866 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:20:10.575018 | orchestrator | 2025-06-03 15:20:10.575457 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-03 15:20:10.576165 | orchestrator | Tuesday 03 June 2025 15:20:10 +0000 (0:00:02.009) 0:00:44.446 ********** 2025-06-03 15:20:10.757268 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:20:10.838611 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:20:10.921538 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:20:11.003839 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:20:11.094596 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:20:11.224036 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:20:11.224557 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:20:11.225745 | orchestrator | 2025-06-03 15:20:11.229075 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan 
configuration changed] ******** 2025-06-03 15:20:11.229176 | orchestrator | Tuesday 03 June 2025 15:20:11 +0000 (0:00:00.658) 0:00:45.105 ********** 2025-06-03 15:20:11.380139 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:20:11.458320 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:20:11.713128 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:20:11.803731 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:20:11.887761 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:20:11.918585 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:20:11.919476 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:20:11.920839 | orchestrator | 2025-06-03 15:20:11.922278 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:20:11.922298 | orchestrator | 2025-06-03 15:20:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:20:11.922304 | orchestrator | 2025-06-03 15:20:11 | INFO  | Please wait and do not abort execution. 2025-06-03 15:20:11.923082 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:20:11.923566 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:11.924435 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:11.925263 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:11.926076 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:11.926672 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:11.927116 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:11.927900 | orchestrator | 2025-06-03 15:20:11.928034 | orchestrator | 2025-06-03 15:20:11.928750 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:20:11.929318 | orchestrator | Tuesday 03 June 2025 15:20:11 +0000 (0:00:00.695) 0:00:45.800 ********** 2025-06-03 15:20:11.932315 | orchestrator | =============================================================================== 2025-06-03 15:20:11.932368 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.17s 2025-06-03 15:20:11.933840 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.70s 2025-06-03 15:20:11.934744 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.62s 2025-06-03 15:20:11.934913 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.41s 2025-06-03 15:20:11.935742 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 2.33s 2025-06-03 15:20:11.935898 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.21s 2025-06-03 15:20:11.936670 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.18s 2025-06-03 15:20:11.937054 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.01s 2025-06-03 15:20:11.937630 | orchestrator | osism.commons.network : Install required 
packages ----------------------- 1.94s 2025-06-03 15:20:11.938083 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.77s 2025-06-03 15:20:11.939069 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.49s 2025-06-03 15:20:11.939503 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.32s 2025-06-03 15:20:11.940073 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s 2025-06-03 15:20:11.940638 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.25s 2025-06-03 15:20:11.941015 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.20s 2025-06-03 15:20:11.941509 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.17s 2025-06-03 15:20:11.942299 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.14s 2025-06-03 15:20:11.942896 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.02s 2025-06-03 15:20:11.943006 | orchestrator | osism.commons.network : Create required directories --------------------- 1.00s 2025-06-03 15:20:11.943378 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s 2025-06-03 15:20:12.558731 | orchestrator | + osism apply wireguard 2025-06-03 15:20:14.236599 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:20:14.236687 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:20:14.236698 | orchestrator | Registering Redlock._release_script 2025-06-03 15:20:14.295834 | orchestrator | 2025-06-03 15:20:14 | INFO  | Task ce866782-65cd-4881-8938-b4759ff438b3 (wireguard) was prepared for execution. 2025-06-03 15:20:14.295915 | orchestrator | 2025-06-03 15:20:14 | INFO  | It takes a moment until task ce866782-65cd-4881-8938-b4759ff438b3 (wireguard) has been started and output is visible here. 
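The osism.commons.network run above writes one systemd-networkd .netdev/.network pair per VXLAN overlay: vxlan0 (VNI 42, addressed from 192.168.112.0/20 on the manager only) and vxlan1 (VNI 23, addressed from 192.168.128.0/20 on every host), both with MTU 1350 and a full mesh of unicast destinations. A minimal sketch of how the result can be checked on a node, assuming only iproute2 and systemd-networkd are available (file names are taken from the cleanup task above; expected values differ per node):

    # Unit files written by the role (per-interface .netdev + .network)
    ls /etc/systemd/network/30-vxlan*.netdev /etc/systemd/network/30-vxlan*.network
    # VNI, local endpoint and MTU of the overlay interface
    ip -d link show vxlan0        # expect: vxlan id 42 local 192.168.16.<node> ... mtu 1350
    networkctl status vxlan1      # expect an address from 192.168.128.0/20
    # Unicast forwarding entries towards the other mesh members
    bridge fdb show dev vxlan0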
2025-06-03 15:20:18.269986 | orchestrator | 2025-06-03 15:20:18.270322 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-03 15:20:18.270925 | orchestrator | 2025-06-03 15:20:18.271396 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-03 15:20:18.272982 | orchestrator | Tuesday 03 June 2025 15:20:18 +0000 (0:00:00.232) 0:00:00.232 ********** 2025-06-03 15:20:19.782595 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:19.783060 | orchestrator | 2025-06-03 15:20:19.784644 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-03 15:20:19.784682 | orchestrator | Tuesday 03 June 2025 15:20:19 +0000 (0:00:01.513) 0:00:01.746 ********** 2025-06-03 15:20:26.322484 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:26.323892 | orchestrator | 2025-06-03 15:20:26.324709 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-03 15:20:26.326012 | orchestrator | Tuesday 03 June 2025 15:20:26 +0000 (0:00:06.537) 0:00:08.283 ********** 2025-06-03 15:20:26.892811 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:26.893610 | orchestrator | 2025-06-03 15:20:26.896506 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-03 15:20:26.896589 | orchestrator | Tuesday 03 June 2025 15:20:26 +0000 (0:00:00.573) 0:00:08.857 ********** 2025-06-03 15:20:27.325993 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:27.326426 | orchestrator | 2025-06-03 15:20:27.326724 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-03 15:20:27.327076 | orchestrator | Tuesday 03 June 2025 15:20:27 +0000 (0:00:00.432) 0:00:09.289 ********** 2025-06-03 15:20:27.850416 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:27.850834 | orchestrator | 2025-06-03 15:20:27.852393 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-06-03 15:20:27.854272 | orchestrator | Tuesday 03 June 2025 15:20:27 +0000 (0:00:00.524) 0:00:09.813 ********** 2025-06-03 15:20:28.435376 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:28.435818 | orchestrator | 2025-06-03 15:20:28.437229 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-06-03 15:20:28.438278 | orchestrator | Tuesday 03 June 2025 15:20:28 +0000 (0:00:00.584) 0:00:10.398 ********** 2025-06-03 15:20:28.828871 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:28.829671 | orchestrator | 2025-06-03 15:20:28.830432 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-06-03 15:20:28.831159 | orchestrator | Tuesday 03 June 2025 15:20:28 +0000 (0:00:00.395) 0:00:10.793 ********** 2025-06-03 15:20:30.075282 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:30.075795 | orchestrator | 2025-06-03 15:20:30.077109 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-06-03 15:20:30.077138 | orchestrator | Tuesday 03 June 2025 15:20:30 +0000 (0:00:01.245) 0:00:12.038 ********** 2025-06-03 15:20:30.988974 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 15:20:30.991867 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:30.992990 | orchestrator | 2025-06-03 15:20:30.994479 | orchestrator | TASK 
[osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-06-03 15:20:30.995216 | orchestrator | Tuesday 03 June 2025 15:20:30 +0000 (0:00:00.912) 0:00:12.951 ********** 2025-06-03 15:20:32.806371 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:32.807217 | orchestrator | 2025-06-03 15:20:32.808733 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-06-03 15:20:32.811488 | orchestrator | Tuesday 03 June 2025 15:20:32 +0000 (0:00:01.816) 0:00:14.767 ********** 2025-06-03 15:20:33.799166 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:33.799413 | orchestrator | 2025-06-03 15:20:33.799727 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:20:33.800452 | orchestrator | 2025-06-03 15:20:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:20:33.800478 | orchestrator | 2025-06-03 15:20:33 | INFO  | Please wait and do not abort execution. 2025-06-03 15:20:33.801000 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:20:33.801804 | orchestrator | 2025-06-03 15:20:33.802611 | orchestrator | 2025-06-03 15:20:33.803935 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:20:33.804333 | orchestrator | Tuesday 03 June 2025 15:20:33 +0000 (0:00:00.996) 0:00:15.763 ********** 2025-06-03 15:20:33.805561 | orchestrator | =============================================================================== 2025-06-03 15:20:33.806101 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.54s 2025-06-03 15:20:33.808251 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.82s 2025-06-03 15:20:33.808385 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.51s 2025-06-03 15:20:33.809218 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.25s 2025-06-03 15:20:33.809840 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.00s 2025-06-03 15:20:33.810360 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s 2025-06-03 15:20:33.810801 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.58s 2025-06-03 15:20:33.811443 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2025-06-03 15:20:33.812079 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-06-03 15:20:33.812701 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2025-06-03 15:20:33.813256 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2025-06-03 15:20:34.467612 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-06-03 15:20:34.503590 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-06-03 15:20:34.503721 | orchestrator | Dload Upload Total Spent Left Speed 2025-06-03 15:20:34.585542 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 171 0 --:--:-- --:--:-- --:--:-- 172 2025-06-03 15:20:34.602144 | orchestrator | + osism apply --environment custom workarounds 
2025-06-03 15:20:36.335372 | orchestrator | 2025-06-03 15:20:36 | INFO  | Trying to run play workarounds in environment custom 2025-06-03 15:20:36.340167 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:20:36.340252 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:20:36.340265 | orchestrator | Registering Redlock._release_script 2025-06-03 15:20:36.399648 | orchestrator | 2025-06-03 15:20:36 | INFO  | Task bd0f9795-4919-4af8-ab9a-0be505a30d23 (workarounds) was prepared for execution. 2025-06-03 15:20:36.399736 | orchestrator | 2025-06-03 15:20:36 | INFO  | It takes a moment until task bd0f9795-4919-4af8-ab9a-0be505a30d23 (workarounds) has been started and output is visible here. 2025-06-03 15:20:40.440480 | orchestrator | 2025-06-03 15:20:40.445289 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:20:40.446263 | orchestrator | 2025-06-03 15:20:40.447582 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-06-03 15:20:40.448440 | orchestrator | Tuesday 03 June 2025 15:20:40 +0000 (0:00:00.142) 0:00:00.142 ********** 2025-06-03 15:20:40.606689 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-06-03 15:20:40.695581 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-06-03 15:20:40.779828 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-06-03 15:20:40.862636 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-06-03 15:20:41.057110 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-06-03 15:20:41.222011 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-06-03 15:20:41.223283 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-06-03 15:20:41.224550 | orchestrator | 2025-06-03 15:20:41.224576 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-06-03 15:20:41.225307 | orchestrator | 2025-06-03 15:20:41.226363 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-03 15:20:41.226645 | orchestrator | Tuesday 03 June 2025 15:20:41 +0000 (0:00:00.781) 0:00:00.924 ********** 2025-06-03 15:20:43.796271 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:43.796377 | orchestrator | 2025-06-03 15:20:43.796732 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-06-03 15:20:43.797355 | orchestrator | 2025-06-03 15:20:43.798125 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-03 15:20:43.798816 | orchestrator | Tuesday 03 June 2025 15:20:43 +0000 (0:00:02.571) 0:00:03.495 ********** 2025-06-03 15:20:45.675969 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:20:45.679379 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:20:45.679420 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:20:45.680077 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:20:45.682177 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:20:45.685115 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:20:45.685147 | orchestrator | 2025-06-03 15:20:45.685160 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-06-03 15:20:45.685173 | orchestrator | 2025-06-03 15:20:45.685688 | orchestrator | TASK 
[Copy custom CA certificates] ********************************************* 2025-06-03 15:20:45.686381 | orchestrator | Tuesday 03 June 2025 15:20:45 +0000 (0:00:01.884) 0:00:05.380 ********** 2025-06-03 15:20:47.206531 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:20:47.207390 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:20:47.208795 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:20:47.209061 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:20:47.210320 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:20:47.211613 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:20:47.212408 | orchestrator | 2025-06-03 15:20:47.213416 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-06-03 15:20:47.215899 | orchestrator | Tuesday 03 June 2025 15:20:47 +0000 (0:00:01.527) 0:00:06.907 ********** 2025-06-03 15:20:51.271642 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:20:51.271761 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:20:51.271821 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:20:51.271842 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:20:51.271853 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:20:51.273576 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:20:51.274763 | orchestrator | 2025-06-03 15:20:51.275347 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-06-03 15:20:51.275459 | orchestrator | Tuesday 03 June 2025 15:20:51 +0000 (0:00:04.068) 0:00:10.976 ********** 2025-06-03 15:20:51.431010 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:20:51.512347 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:20:51.590277 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:20:51.668304 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:20:52.001544 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:20:52.002131 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:20:52.002167 | orchestrator | 2025-06-03 15:20:52.002255 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-06-03 15:20:52.004297 | orchestrator | 2025-06-03 15:20:52.004549 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-06-03 15:20:52.005748 | orchestrator | Tuesday 03 June 2025 15:20:51 +0000 (0:00:00.729) 0:00:11.705 ********** 2025-06-03 15:20:53.719443 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:53.721874 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:20:53.723798 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:20:53.724784 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:20:53.726677 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:20:53.727322 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:20:53.728491 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:20:53.728673 | orchestrator | 2025-06-03 15:20:53.729705 | orchestrator | TASK [Copy workarounds systemd 
unit file] ************************************** 2025-06-03 15:20:53.729820 | orchestrator | Tuesday 03 June 2025 15:20:53 +0000 (0:00:01.716) 0:00:13.422 ********** 2025-06-03 15:20:55.428773 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:55.429566 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:20:55.430474 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:20:55.431854 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:20:55.433165 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:20:55.434671 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:20:55.436268 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:20:55.436777 | orchestrator | 2025-06-03 15:20:55.437348 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-06-03 15:20:55.437939 | orchestrator | Tuesday 03 June 2025 15:20:55 +0000 (0:00:01.707) 0:00:15.130 ********** 2025-06-03 15:20:57.040179 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:20:57.040629 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:20:57.042119 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:20:57.047047 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:20:57.047089 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:20:57.047096 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:20:57.047126 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:57.047133 | orchestrator | 2025-06-03 15:20:57.047141 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-06-03 15:20:57.047366 | orchestrator | Tuesday 03 June 2025 15:20:57 +0000 (0:00:01.614) 0:00:16.744 ********** 2025-06-03 15:20:58.870441 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:58.873443 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:20:58.873506 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:20:58.874599 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:20:58.876025 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:20:58.876798 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:20:58.879028 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:20:58.879903 | orchestrator | 2025-06-03 15:20:58.880752 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-03 15:20:58.881787 | orchestrator | Tuesday 03 June 2025 15:20:58 +0000 (0:00:01.826) 0:00:18.570 ********** 2025-06-03 15:20:59.034141 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:20:59.130291 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:20:59.209744 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:20:59.292276 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:20:59.367950 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:20:59.498492 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:20:59.499430 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:20:59.500335 | orchestrator | 2025-06-03 15:20:59.501120 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-03 15:20:59.501925 | orchestrator | 2025-06-03 15:20:59.502623 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-03 15:20:59.502986 | orchestrator | Tuesday 03 June 2025 15:20:59 +0000 (0:00:00.631) 0:00:19.202 ********** 2025-06-03 15:21:02.252087 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:21:02.252614 | orchestrator | ok: [testbed-manager] 
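In the certificate tasks above, the testbed CA from /opt/configuration/environments/kolla/certificates/ca/testbed.crt is copied to every non-manager node and update-ca-certificates is run; the workarounds.sh script and its systemd unit are installed on all hosts. A hedged sketch for checking both on a node, assuming the certificate lands in the Debian/Ubuntu local trust directory (the play does not print the destination path):

    # Assumed destination of the copied CA certificate on a node
    openssl x509 -in /usr/local/share/ca-certificates/testbed.crt -noout -subject -enddate
    # Re-running the update is idempotent; "0 added, 0 removed" means the store is already current
    sudo update-ca-certificates
    # Unit installed by the "Copy workarounds systemd unit file" task
    systemctl status workarounds.service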
2025-06-03 15:21:02.253594 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:21:02.255089 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:21:02.256920 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:21:02.257825 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:21:02.258812 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:21:02.261060 | orchestrator | 2025-06-03 15:21:02.261117 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:21:02.261811 | orchestrator | 2025-06-03 15:21:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:21:02.262630 | orchestrator | 2025-06-03 15:21:02 | INFO  | Please wait and do not abort execution. 2025-06-03 15:21:02.263697 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:21:02.264804 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:02.265658 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:02.266677 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:02.267548 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:02.268316 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:02.269010 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:02.269466 | orchestrator | 2025-06-03 15:21:02.269917 | orchestrator | 2025-06-03 15:21:02.270623 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:21:02.271626 | orchestrator | Tuesday 03 June 2025 15:21:02 +0000 (0:00:02.753) 0:00:21.956 ********** 2025-06-03 15:21:02.271657 | orchestrator | =============================================================================== 2025-06-03 15:21:02.272314 | orchestrator | Run update-ca-certificates ---------------------------------------------- 4.07s 2025-06-03 15:21:02.272742 | orchestrator | Install python3-docker -------------------------------------------------- 2.75s 2025-06-03 15:21:02.273285 | orchestrator | Apply netplan configuration --------------------------------------------- 2.57s 2025-06-03 15:21:02.273918 | orchestrator | Apply netplan configuration --------------------------------------------- 1.88s 2025-06-03 15:21:02.274384 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.83s 2025-06-03 15:21:02.275106 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s 2025-06-03 15:21:02.275347 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.71s 2025-06-03 15:21:02.276345 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.61s 2025-06-03 15:21:02.276814 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s 2025-06-03 15:21:02.277470 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.78s 2025-06-03 15:21:02.278315 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.73s 2025-06-03 15:21:02.278603 | orchestrator | 
Enable and start workarounds.service (RedHat) --------------------------- 0.63s 2025-06-03 15:21:02.978148 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-03 15:21:04.688891 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:21:04.688981 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:21:04.688994 | orchestrator | Registering Redlock._release_script 2025-06-03 15:21:04.748808 | orchestrator | 2025-06-03 15:21:04 | INFO  | Task e20c9577-e40d-493c-b273-9ee3d81cbfea (reboot) was prepared for execution. 2025-06-03 15:21:04.748894 | orchestrator | 2025-06-03 15:21:04 | INFO  | It takes a moment until task e20c9577-e40d-493c-b273-9ee3d81cbfea (reboot) has been started and output is visible here. 2025-06-03 15:21:08.798299 | orchestrator | 2025-06-03 15:21:08.798464 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-03 15:21:08.798791 | orchestrator | 2025-06-03 15:21:08.799233 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-03 15:21:08.799704 | orchestrator | Tuesday 03 June 2025 15:21:08 +0000 (0:00:00.218) 0:00:00.218 ********** 2025-06-03 15:21:08.901615 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:21:08.901898 | orchestrator | 2025-06-03 15:21:08.904047 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:08.904262 | orchestrator | Tuesday 03 June 2025 15:21:08 +0000 (0:00:00.104) 0:00:00.322 ********** 2025-06-03 15:21:09.849477 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:21:09.849557 | orchestrator | 2025-06-03 15:21:09.850356 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:09.851575 | orchestrator | Tuesday 03 June 2025 15:21:09 +0000 (0:00:00.946) 0:00:01.269 ********** 2025-06-03 15:21:09.953783 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:21:09.953892 | orchestrator | 2025-06-03 15:21:09.954747 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-03 15:21:09.955713 | orchestrator | 2025-06-03 15:21:09.956277 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-03 15:21:09.957434 | orchestrator | Tuesday 03 June 2025 15:21:09 +0000 (0:00:00.101) 0:00:01.370 ********** 2025-06-03 15:21:10.058582 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:21:10.059327 | orchestrator | 2025-06-03 15:21:10.060642 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:10.061571 | orchestrator | Tuesday 03 June 2025 15:21:10 +0000 (0:00:00.109) 0:00:01.479 ********** 2025-06-03 15:21:10.730378 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:21:10.731114 | orchestrator | 2025-06-03 15:21:10.732697 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:10.732929 | orchestrator | Tuesday 03 June 2025 15:21:10 +0000 (0:00:00.671) 0:00:02.151 ********** 2025-06-03 15:21:10.845265 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:21:10.846152 | orchestrator | 2025-06-03 15:21:10.847032 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-03 15:21:10.848872 | orchestrator | 2025-06-03 15:21:10.851390 | orchestrator | TASK [Exit playbook, if user did not mean to 
reboot systems] ******************* 2025-06-03 15:21:10.852388 | orchestrator | Tuesday 03 June 2025 15:21:10 +0000 (0:00:00.113) 0:00:02.264 ********** 2025-06-03 15:21:11.062519 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:21:11.062625 | orchestrator | 2025-06-03 15:21:11.062889 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:11.063853 | orchestrator | Tuesday 03 June 2025 15:21:11 +0000 (0:00:00.217) 0:00:02.481 ********** 2025-06-03 15:21:11.770705 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:21:11.770818 | orchestrator | 2025-06-03 15:21:11.771215 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:11.772047 | orchestrator | Tuesday 03 June 2025 15:21:11 +0000 (0:00:00.709) 0:00:03.191 ********** 2025-06-03 15:21:11.894398 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:21:11.894498 | orchestrator | 2025-06-03 15:21:11.896998 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-03 15:21:11.897398 | orchestrator | 2025-06-03 15:21:11.898062 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-03 15:21:11.898349 | orchestrator | Tuesday 03 June 2025 15:21:11 +0000 (0:00:00.123) 0:00:03.314 ********** 2025-06-03 15:21:12.021806 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:21:12.022535 | orchestrator | 2025-06-03 15:21:12.023157 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:12.023855 | orchestrator | Tuesday 03 June 2025 15:21:12 +0000 (0:00:00.127) 0:00:03.442 ********** 2025-06-03 15:21:12.716542 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:21:12.717074 | orchestrator | 2025-06-03 15:21:12.717872 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:12.720720 | orchestrator | Tuesday 03 June 2025 15:21:12 +0000 (0:00:00.695) 0:00:04.138 ********** 2025-06-03 15:21:12.830424 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:21:12.830673 | orchestrator | 2025-06-03 15:21:12.831413 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-03 15:21:12.832981 | orchestrator | 2025-06-03 15:21:12.833009 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-03 15:21:12.833168 | orchestrator | Tuesday 03 June 2025 15:21:12 +0000 (0:00:00.110) 0:00:04.249 ********** 2025-06-03 15:21:12.939467 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:21:12.940091 | orchestrator | 2025-06-03 15:21:12.942164 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:12.942612 | orchestrator | Tuesday 03 June 2025 15:21:12 +0000 (0:00:00.109) 0:00:04.359 ********** 2025-06-03 15:21:13.654662 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:21:13.655137 | orchestrator | 2025-06-03 15:21:13.657038 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:13.657084 | orchestrator | Tuesday 03 June 2025 15:21:13 +0000 (0:00:00.716) 0:00:05.075 ********** 2025-06-03 15:21:13.767522 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:21:13.768423 | orchestrator | 2025-06-03 15:21:13.768718 | orchestrator | PLAY [Reboot systems] 
********************************************************** 2025-06-03 15:21:13.768747 | orchestrator | 2025-06-03 15:21:13.768992 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-03 15:21:13.769339 | orchestrator | Tuesday 03 June 2025 15:21:13 +0000 (0:00:00.111) 0:00:05.186 ********** 2025-06-03 15:21:13.877530 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:21:13.877767 | orchestrator | 2025-06-03 15:21:13.878781 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:13.880313 | orchestrator | Tuesday 03 June 2025 15:21:13 +0000 (0:00:00.112) 0:00:05.298 ********** 2025-06-03 15:21:14.564353 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:21:14.564631 | orchestrator | 2025-06-03 15:21:14.565736 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:14.566931 | orchestrator | Tuesday 03 June 2025 15:21:14 +0000 (0:00:00.682) 0:00:05.981 ********** 2025-06-03 15:21:14.602169 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:21:14.602573 | orchestrator | 2025-06-03 15:21:14.603799 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:21:14.604045 | orchestrator | 2025-06-03 15:21:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:21:14.604914 | orchestrator | 2025-06-03 15:21:14 | INFO  | Please wait and do not abort execution. 2025-06-03 15:21:14.605633 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:14.606696 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:14.607557 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:14.608478 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:14.608934 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:14.609548 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:14.610935 | orchestrator | 2025-06-03 15:21:14.611303 | orchestrator | 2025-06-03 15:21:14.612121 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:21:14.613054 | orchestrator | Tuesday 03 June 2025 15:21:14 +0000 (0:00:00.041) 0:00:06.023 ********** 2025-06-03 15:21:14.613913 | orchestrator | =============================================================================== 2025-06-03 15:21:14.613964 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.42s 2025-06-03 15:21:14.614692 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.78s 2025-06-03 15:21:14.614984 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.60s 2025-06-03 15:21:15.211176 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-03 15:21:16.888848 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:21:16.889005 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:21:16.889023 | orchestrator | Registering Redlock._release_script 2025-06-03 
15:21:16.949290 | orchestrator | 2025-06-03 15:21:16 | INFO  | Task 6597e82d-f9d3-4292-96ca-75ec49870af3 (wait-for-connection) was prepared for execution. 2025-06-03 15:21:16.949398 | orchestrator | 2025-06-03 15:21:16 | INFO  | It takes a moment until task 6597e82d-f9d3-4292-96ca-75ec49870af3 (wait-for-connection) has been started and output is visible here. 2025-06-03 15:21:21.155657 | orchestrator | 2025-06-03 15:21:21.156635 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-03 15:21:21.159992 | orchestrator | 2025-06-03 15:21:21.160028 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-03 15:21:21.160041 | orchestrator | Tuesday 03 June 2025 15:21:21 +0000 (0:00:00.285) 0:00:00.285 ********** 2025-06-03 15:21:33.775819 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:21:33.775959 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:21:33.775976 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:21:33.776052 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:21:33.777576 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:21:33.778597 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:21:33.778910 | orchestrator | 2025-06-03 15:21:33.779877 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:21:33.780459 | orchestrator | 2025-06-03 15:21:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:21:33.780558 | orchestrator | 2025-06-03 15:21:33 | INFO  | Please wait and do not abort execution. 2025-06-03 15:21:33.782253 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:21:33.783277 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:21:33.783544 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:21:33.784270 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:21:33.784569 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:21:33.785101 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:21:33.785754 | orchestrator | 2025-06-03 15:21:33.786089 | orchestrator | 2025-06-03 15:21:33.786653 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:21:33.787103 | orchestrator | Tuesday 03 June 2025 15:21:33 +0000 (0:00:12.617) 0:00:12.903 ********** 2025-06-03 15:21:33.787518 | orchestrator | =============================================================================== 2025-06-03 15:21:33.788300 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.62s 2025-06-03 15:21:34.425281 | orchestrator | + osism apply hddtemp 2025-06-03 15:21:36.181259 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:21:36.181352 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:21:36.181365 | orchestrator | Registering Redlock._release_script 2025-06-03 15:21:36.244298 | orchestrator | 2025-06-03 15:21:36 | INFO  | Task 37c91291-a785-4ecf-aef8-9898747ff6b7 (hddtemp) was prepared for execution. 
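
The sequence above shows the two-step reboot pattern used here: the reboot play returns without waiting for the nodes to come back, and a separate wait-for-connection play then blocks until every node answers again. A minimal shell sketch of that pattern, assuming the reboot invocation takes the same -l/-e arguments (only the wait-for-connection call is visible in this excerpt):

    # Sketch only: the wait-for-connection call is copied from the trace above,
    # the reboot call is an assumption based on the guard variable it checks.
    osism apply reboot -l testbed-nodes -e ireallymeanit=yes              # returns immediately; does not wait for the reboot
    osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes # polls until SSH is reachable on all nodes again
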
2025-06-03 15:21:36.244390 | orchestrator | 2025-06-03 15:21:36 | INFO  | It takes a moment until task 37c91291-a785-4ecf-aef8-9898747ff6b7 (hddtemp) has been started and output is visible here. 2025-06-03 15:21:40.692608 | orchestrator | 2025-06-03 15:21:40.693352 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-03 15:21:40.694690 | orchestrator | 2025-06-03 15:21:40.699889 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-03 15:21:40.700502 | orchestrator | Tuesday 03 June 2025 15:21:40 +0000 (0:00:00.295) 0:00:00.295 ********** 2025-06-03 15:21:40.848062 | orchestrator | ok: [testbed-manager] 2025-06-03 15:21:40.931435 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:21:41.010766 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:21:41.104546 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:21:41.301915 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:21:41.430843 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:21:41.432259 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:21:41.435893 | orchestrator | 2025-06-03 15:21:41.435936 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-03 15:21:41.435947 | orchestrator | Tuesday 03 June 2025 15:21:41 +0000 (0:00:00.738) 0:00:01.034 ********** 2025-06-03 15:21:42.685276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:21:42.686274 | orchestrator | 2025-06-03 15:21:42.687281 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-03 15:21:42.688291 | orchestrator | Tuesday 03 June 2025 15:21:42 +0000 (0:00:01.251) 0:00:02.285 ********** 2025-06-03 15:21:44.634532 | orchestrator | ok: [testbed-manager] 2025-06-03 15:21:44.636394 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:21:44.637007 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:21:44.640081 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:21:44.642874 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:21:44.644474 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:21:44.645025 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:21:44.645752 | orchestrator | 2025-06-03 15:21:44.647083 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-03 15:21:44.647637 | orchestrator | Tuesday 03 June 2025 15:21:44 +0000 (0:00:01.951) 0:00:04.237 ********** 2025-06-03 15:21:45.274685 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:45.364067 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:21:45.814802 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:21:45.814914 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:21:45.816376 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:21:45.816629 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:21:45.818568 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:21:45.820257 | orchestrator | 2025-06-03 15:21:45.821354 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-03 15:21:45.822175 | orchestrator | Tuesday 03 June 2025 15:21:45 +0000 (0:00:01.178) 0:00:05.415 ********** 2025-06-03 15:21:46.959791 | orchestrator | ok: [testbed-node-0] 2025-06-03 
15:21:46.959897 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:21:46.961396 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:21:46.962225 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:21:46.963798 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:21:46.964949 | orchestrator | ok: [testbed-manager] 2025-06-03 15:21:46.965810 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:21:46.967718 | orchestrator | 2025-06-03 15:21:46.967816 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-06-03 15:21:46.968863 | orchestrator | Tuesday 03 June 2025 15:21:46 +0000 (0:00:01.149) 0:00:06.564 ********** 2025-06-03 15:21:47.412039 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:21:47.508017 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:21:47.591078 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:47.674926 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:21:47.807970 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:21:47.808679 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:21:47.809653 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:21:47.810852 | orchestrator | 2025-06-03 15:21:47.811132 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-06-03 15:21:47.812004 | orchestrator | Tuesday 03 June 2025 15:21:47 +0000 (0:00:00.845) 0:00:07.409 ********** 2025-06-03 15:22:00.255606 | orchestrator | changed: [testbed-manager] 2025-06-03 15:22:00.257263 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:22:00.257289 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:22:00.257300 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:22:00.260752 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:22:00.261598 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:22:00.262904 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:22:00.264418 | orchestrator | 2025-06-03 15:22:00.265281 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-06-03 15:22:00.266144 | orchestrator | Tuesday 03 June 2025 15:22:00 +0000 (0:00:12.447) 0:00:19.857 ********** 2025-06-03 15:22:01.678995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:22:01.680158 | orchestrator | 2025-06-03 15:22:01.683168 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-06-03 15:22:01.684890 | orchestrator | Tuesday 03 June 2025 15:22:01 +0000 (0:00:01.424) 0:00:21.282 ********** 2025-06-03 15:22:03.637884 | orchestrator | changed: [testbed-manager] 2025-06-03 15:22:03.640148 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:22:03.640883 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:22:03.643453 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:22:03.644277 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:22:03.645813 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:22:03.646532 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:22:03.647369 | orchestrator | 2025-06-03 15:22:03.648807 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:22:03.650327 | orchestrator | 2025-06-03 15:22:03 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-06-03 15:22:03.650411 | orchestrator | 2025-06-03 15:22:03 | INFO  | Please wait and do not abort execution. 2025-06-03 15:22:03.651126 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:22:03.652358 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:22:03.653338 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:22:03.654433 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:22:03.655454 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:22:03.656522 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:22:03.657478 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:22:03.658175 | orchestrator | 2025-06-03 15:22:03.659872 | orchestrator | 2025-06-03 15:22:03.659989 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:22:03.661044 | orchestrator | Tuesday 03 June 2025 15:22:03 +0000 (0:00:01.959) 0:00:23.241 ********** 2025-06-03 15:22:03.661455 | orchestrator | =============================================================================== 2025-06-03 15:22:03.662716 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.45s 2025-06-03 15:22:03.663293 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.96s 2025-06-03 15:22:03.664098 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.95s 2025-06-03 15:22:03.664379 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.42s 2025-06-03 15:22:03.665278 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.25s 2025-06-03 15:22:03.665642 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s 2025-06-03 15:22:03.666184 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.15s 2025-06-03 15:22:03.666910 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.85s 2025-06-03 15:22:03.667413 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.74s 2025-06-03 15:22:04.314822 | orchestrator | ++ semver 9.1.0 7.1.1 2025-06-03 15:22:04.370603 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-03 15:22:04.370703 | orchestrator | + sudo systemctl restart manager.service 2025-06-03 15:22:18.226280 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-03 15:22:18.226358 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-03 15:22:18.226373 | orchestrator | + local max_attempts=60 2025-06-03 15:22:18.226384 | orchestrator | + local name=ceph-ansible 2025-06-03 15:22:18.226396 | orchestrator | + local attempt_num=1 2025-06-03 15:22:18.226407 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:22:18.265117 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:22:18.265197 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 
15:22:18.265219 | orchestrator | + sleep 5 2025-06-03 15:22:23.269764 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:22:23.305324 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:22:23.305434 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:22:23.305456 | orchestrator | + sleep 5 2025-06-03 15:22:28.307512 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:22:28.341938 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:22:28.342068 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:22:28.342085 | orchestrator | + sleep 5 2025-06-03 15:22:33.345847 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:22:33.385668 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:22:33.385759 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:22:33.385782 | orchestrator | + sleep 5 2025-06-03 15:22:38.390628 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:22:38.424375 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:22:38.424473 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:22:38.424489 | orchestrator | + sleep 5 2025-06-03 15:22:43.427690 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:22:43.466856 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:22:43.466942 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:22:43.466956 | orchestrator | + sleep 5 2025-06-03 15:22:48.471229 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:22:48.508076 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:22:48.508174 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:22:48.508190 | orchestrator | + sleep 5 2025-06-03 15:22:53.514674 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:22:53.566643 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-03 15:22:53.566728 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:22:53.566742 | orchestrator | + sleep 5 2025-06-03 15:22:58.569393 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:22:58.621664 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-03 15:22:58.621753 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:22:58.621768 | orchestrator | + sleep 5 2025-06-03 15:23:03.627428 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:23:03.667750 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-03 15:23:03.667815 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:23:03.667829 | orchestrator | + sleep 5 2025-06-03 15:23:08.671109 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:23:08.699481 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-03 15:23:08.699550 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:23:08.699563 | orchestrator | + sleep 5 2025-06-03 15:23:13.704837 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:23:13.735307 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-03 
15:23:13.735411 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:23:13.735426 | orchestrator | + sleep 5 2025-06-03 15:23:18.739743 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:23:18.787150 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-03 15:23:18.787219 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-03 15:23:18.787233 | orchestrator | + sleep 5 2025-06-03 15:23:23.792128 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:23:23.836176 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:23:23.836275 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-03 15:23:23.836321 | orchestrator | + local max_attempts=60 2025-06-03 15:23:23.836333 | orchestrator | + local name=kolla-ansible 2025-06-03 15:23:23.836414 | orchestrator | + local attempt_num=1 2025-06-03 15:23:23.836852 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-03 15:23:23.879126 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:23:23.879217 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-03 15:23:23.879231 | orchestrator | + local max_attempts=60 2025-06-03 15:23:23.879242 | orchestrator | + local name=osism-ansible 2025-06-03 15:23:23.879253 | orchestrator | + local attempt_num=1 2025-06-03 15:23:23.880216 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-03 15:23:23.911129 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:23:23.911225 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-03 15:23:23.911240 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-03 15:23:24.099178 | orchestrator | ARA in ceph-ansible already disabled. 2025-06-03 15:23:24.242560 | orchestrator | ARA in kolla-ansible already disabled. 2025-06-03 15:23:24.384103 | orchestrator | ARA in osism-ansible already disabled. 2025-06-03 15:23:24.552223 | orchestrator | ARA in osism-kubernetes already disabled. 2025-06-03 15:23:24.552649 | orchestrator | + osism apply gather-facts 2025-06-03 15:23:26.353482 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:23:26.353575 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:23:26.353587 | orchestrator | Registering Redlock._release_script 2025-06-03 15:23:26.412257 | orchestrator | 2025-06-03 15:23:26 | INFO  | Task 68aeaf6b-9afa-47e3-bebd-561a110ccc71 (gather-facts) was prepared for execution. 2025-06-03 15:23:26.412397 | orchestrator | 2025-06-03 15:23:26 | INFO  | It takes a moment until task 68aeaf6b-9afa-47e3-bebd-561a110ccc71 (gather-facts) has been started and output is visible here. 
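
The shell trace above polls the ceph-ansible, kolla-ansible and osism-ansible containers until Docker reports them healthy. Reconstructed from the traced statements, the helper looks roughly like this (a sketch; only the loop body is visible in the log, so the failure message and return code are assumptions):

    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1
        # Poll the Docker health status every 5 seconds until it reports "healthy".
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num++ == max_attempts )); then
                echo "Container $name did not become healthy after $max_attempts attempts" >&2  # assumed error handling
                return 1
            fi
            sleep 5
        done
    }
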
2025-06-03 15:23:30.448690 | orchestrator | 2025-06-03 15:23:30.448783 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-03 15:23:30.449180 | orchestrator | 2025-06-03 15:23:30.450194 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-03 15:23:30.453651 | orchestrator | Tuesday 03 June 2025 15:23:30 +0000 (0:00:00.242) 0:00:00.242 ********** 2025-06-03 15:23:36.199956 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:23:36.200090 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:23:36.201353 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:23:36.201746 | orchestrator | ok: [testbed-manager] 2025-06-03 15:23:36.204026 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:23:36.206962 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:23:36.207075 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:23:36.207609 | orchestrator | 2025-06-03 15:23:36.207755 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-03 15:23:36.208626 | orchestrator | 2025-06-03 15:23:36.208693 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-03 15:23:36.208751 | orchestrator | Tuesday 03 June 2025 15:23:36 +0000 (0:00:05.757) 0:00:06.000 ********** 2025-06-03 15:23:36.355926 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:23:36.432530 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:23:36.512177 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:23:36.589462 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:23:36.664346 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:23:36.708343 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:23:36.708642 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:23:36.709680 | orchestrator | 2025-06-03 15:23:36.711538 | orchestrator | 2025-06-03 15:23:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:23:36.711589 | orchestrator | 2025-06-03 15:23:36 | INFO  | Please wait and do not abort execution. 
2025-06-03 15:23:36.711716 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:23:36.712278 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:23:36.713128 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:23:36.713854 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:23:36.714962 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:23:36.715647 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:23:36.716311 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:23:36.716741 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:23:36.717357 | orchestrator | 2025-06-03 15:23:36.718135 | orchestrator | 2025-06-03 15:23:36.718575 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:23:36.719078 | orchestrator | Tuesday 03 June 2025 15:23:36 +0000 (0:00:00.510) 0:00:06.510 ********** 2025-06-03 15:23:36.720079 | orchestrator | =============================================================================== 2025-06-03 15:23:36.721659 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.76s 2025-06-03 15:23:36.722867 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-06-03 15:23:37.398570 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-06-03 15:23:37.414557 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-06-03 15:23:37.425123 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-06-03 15:23:37.437659 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-06-03 15:23:37.454904 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-06-03 15:23:37.467421 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-06-03 15:23:37.478888 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-06-03 15:23:37.498675 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-06-03 15:23:37.513970 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-06-03 15:23:37.527973 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-06-03 15:23:37.539995 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-06-03 15:23:37.553118 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-06-03 
15:23:37.567883 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-06-03 15:23:37.580050 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-06-03 15:23:37.596832 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-06-03 15:23:37.609422 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-06-03 15:23:37.623807 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-06-03 15:23:37.642689 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-06-03 15:23:37.660315 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-06-03 15:23:37.680513 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-06-03 15:23:37.697259 | orchestrator | + [[ false == \t\r\u\e ]] 2025-06-03 15:23:38.018445 | orchestrator | ok: Runtime: 0:20:06.845420 2025-06-03 15:23:38.119057 | 2025-06-03 15:23:38.119181 | TASK [Deploy services] 2025-06-03 15:23:38.652097 | orchestrator | skipping: Conditional result was False 2025-06-03 15:23:38.671640 | 2025-06-03 15:23:38.671844 | TASK [Deploy in a nutshell] 2025-06-03 15:23:39.392547 | orchestrator | + set -e 2025-06-03 15:23:39.394245 | orchestrator | 2025-06-03 15:23:39.394285 | orchestrator | # PULL IMAGES 2025-06-03 15:23:39.394301 | orchestrator | 2025-06-03 15:23:39.394321 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-03 15:23:39.394342 | orchestrator | ++ export INTERACTIVE=false 2025-06-03 15:23:39.394357 | orchestrator | ++ INTERACTIVE=false 2025-06-03 15:23:39.394459 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-03 15:23:39.394488 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-03 15:23:39.394504 | orchestrator | + source /opt/manager-vars.sh 2025-06-03 15:23:39.394516 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-03 15:23:39.394535 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-03 15:23:39.394547 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-03 15:23:39.394565 | orchestrator | ++ CEPH_VERSION=reef 2025-06-03 15:23:39.394577 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-03 15:23:39.394596 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-03 15:23:39.394607 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-03 15:23:39.394621 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-03 15:23:39.394633 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-03 15:23:39.394649 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-03 15:23:39.394660 | orchestrator | ++ export ARA=false 2025-06-03 15:23:39.394671 | orchestrator | ++ ARA=false 2025-06-03 15:23:39.394682 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-03 15:23:39.394694 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-03 15:23:39.394705 | orchestrator | ++ export TEMPEST=false 2025-06-03 15:23:39.394716 | orchestrator | ++ TEMPEST=false 2025-06-03 15:23:39.394727 | orchestrator | ++ export IS_ZUUL=true 2025-06-03 15:23:39.394738 | orchestrator | ++ IS_ZUUL=true 2025-06-03 15:23:39.394750 | orchestrator | ++ export 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.73 2025-06-03 15:23:39.394761 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.73 2025-06-03 15:23:39.394772 | orchestrator | ++ export EXTERNAL_API=false 2025-06-03 15:23:39.394783 | orchestrator | ++ EXTERNAL_API=false 2025-06-03 15:23:39.394794 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-03 15:23:39.394806 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-03 15:23:39.394817 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-03 15:23:39.394828 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-03 15:23:39.394839 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-03 15:23:39.394858 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-03 15:23:39.394870 | orchestrator | + echo 2025-06-03 15:23:39.394881 | orchestrator | + echo '# PULL IMAGES' 2025-06-03 15:23:39.394892 | orchestrator | + echo 2025-06-03 15:23:39.394911 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-03 15:23:39.463886 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-03 15:23:39.463952 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-06-03 15:23:40.949441 | orchestrator | 2025-06-03 15:23:40 | INFO  | Trying to run play pull-images in environment custom 2025-06-03 15:23:40.953574 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:23:40.953613 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:23:40.953623 | orchestrator | Registering Redlock._release_script 2025-06-03 15:23:41.004901 | orchestrator | 2025-06-03 15:23:41 | INFO  | Task 44267028-6650-4cb0-8fd3-7ffe1c9b0bb6 (pull-images) was prepared for execution. 2025-06-03 15:23:41.005004 | orchestrator | 2025-06-03 15:23:41 | INFO  | It takes a moment until task 44267028-6650-4cb0-8fd3-7ffe1c9b0bb6 (pull-images) has been started and output is visible here. 
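
The pre-pull step above is gated on the manager version: semver 9.1.0 7.0.0 prints 1 because 9.1.0 is newer, and the pull then runs with two retries (-r 2) against the custom environment. As a sketch, assuming the semver helper prints -1/0/1 as the traced values suggest:

    # Version-gated image pre-pull (sketch; the semver output convention is inferred from the trace).
    if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ]]; then
        osism apply -r 2 -e custom pull-images   # retry up to 2 times, play lives in the "custom" environment
    fi
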
2025-06-03 15:23:45.025875 | orchestrator | 2025-06-03 15:23:45.025995 | orchestrator | PLAY [Pull images] ************************************************************* 2025-06-03 15:23:45.026013 | orchestrator | 2025-06-03 15:23:45.027375 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-06-03 15:23:45.028086 | orchestrator | Tuesday 03 June 2025 15:23:45 +0000 (0:00:00.152) 0:00:00.152 ********** 2025-06-03 15:24:54.011902 | orchestrator | changed: [testbed-manager] 2025-06-03 15:24:54.012048 | orchestrator | 2025-06-03 15:24:54.012068 | orchestrator | TASK [Pull other images] ******************************************************* 2025-06-03 15:24:54.012278 | orchestrator | Tuesday 03 June 2025 15:24:54 +0000 (0:01:08.987) 0:01:09.140 ********** 2025-06-03 15:25:48.361904 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-06-03 15:25:48.362108 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-06-03 15:25:48.362142 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-06-03 15:25:48.363093 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-06-03 15:25:48.363539 | orchestrator | changed: [testbed-manager] => (item=common) 2025-06-03 15:25:48.363700 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-06-03 15:25:48.364492 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-06-03 15:25:48.365449 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-06-03 15:25:48.365661 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-06-03 15:25:48.366501 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-06-03 15:25:48.366863 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-06-03 15:25:48.367019 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-06-03 15:25:48.367532 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-06-03 15:25:48.368014 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-06-03 15:25:48.368431 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-06-03 15:25:48.368869 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-06-03 15:25:48.369322 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-06-03 15:25:48.369838 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-06-03 15:25:48.370301 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-06-03 15:25:48.370976 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-06-03 15:25:48.371037 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-06-03 15:25:48.371422 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-06-03 15:25:48.371696 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-06-03 15:25:48.372044 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-06-03 15:25:48.372492 | orchestrator | 2025-06-03 15:25:48.372654 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:25:48.373145 | orchestrator | 2025-06-03 15:25:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:25:48.373162 | orchestrator | 2025-06-03 15:25:48 | INFO  | Please wait and do not abort execution. 
2025-06-03 15:25:48.373733 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:25:48.373999 | orchestrator | 2025-06-03 15:25:48.374284 | orchestrator | 2025-06-03 15:25:48.374648 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:25:48.374908 | orchestrator | Tuesday 03 June 2025 15:25:48 +0000 (0:00:54.353) 0:02:03.493 ********** 2025-06-03 15:25:48.375193 | orchestrator | =============================================================================== 2025-06-03 15:25:48.375458 | orchestrator | Pull keystone image ---------------------------------------------------- 68.99s 2025-06-03 15:25:48.375729 | orchestrator | Pull other images ------------------------------------------------------ 54.35s 2025-06-03 15:25:50.665027 | orchestrator | 2025-06-03 15:25:50 | INFO  | Trying to run play wipe-partitions in environment custom 2025-06-03 15:25:50.671179 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:25:50.671241 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:25:50.671255 | orchestrator | Registering Redlock._release_script 2025-06-03 15:25:50.731885 | orchestrator | 2025-06-03 15:25:50 | INFO  | Task 489bae28-7887-49bd-abef-ac8a0d4556c3 (wipe-partitions) was prepared for execution. 2025-06-03 15:25:50.731972 | orchestrator | 2025-06-03 15:25:50 | INFO  | It takes a moment until task 489bae28-7887-49bd-abef-ac8a0d4556c3 (wipe-partitions) has been started and output is visible here. 2025-06-03 15:25:54.850091 | orchestrator | 2025-06-03 15:25:54.850221 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-03 15:25:54.850241 | orchestrator | 2025-06-03 15:25:54.850276 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-03 15:25:54.850302 | orchestrator | Tuesday 03 June 2025 15:25:54 +0000 (0:00:00.137) 0:00:00.137 ********** 2025-06-03 15:25:55.479082 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:25:55.479560 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:25:55.479900 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:25:55.480150 | orchestrator | 2025-06-03 15:25:55.480558 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-03 15:25:55.480937 | orchestrator | Tuesday 03 June 2025 15:25:55 +0000 (0:00:00.632) 0:00:00.770 ********** 2025-06-03 15:25:55.672134 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:25:55.766247 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:25:55.766323 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:25:55.767131 | orchestrator | 2025-06-03 15:25:55.767691 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-03 15:25:55.767926 | orchestrator | Tuesday 03 June 2025 15:25:55 +0000 (0:00:00.284) 0:00:01.055 ********** 2025-06-03 15:25:56.636534 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:25:56.636640 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:25:56.636725 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:25:56.636880 | orchestrator | 2025-06-03 15:25:56.637435 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-03 15:25:56.637858 | orchestrator | Tuesday 03 June 2025 15:25:56 +0000 (0:00:00.862) 0:00:01.918 ********** 2025-06-03 15:25:56.851192 | orchestrator | skipping: 
[testbed-node-3] 2025-06-03 15:25:56.978218 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:25:56.978318 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:25:56.978545 | orchestrator | 2025-06-03 15:25:56.978789 | orchestrator | TASK [Check device availability] *********************************************** 2025-06-03 15:25:56.981660 | orchestrator | Tuesday 03 June 2025 15:25:56 +0000 (0:00:00.351) 0:00:02.269 ********** 2025-06-03 15:25:58.220045 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-03 15:25:58.220331 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-03 15:25:58.220464 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-03 15:25:58.220783 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-03 15:25:58.221013 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-03 15:25:58.221439 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-03 15:25:58.221755 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-03 15:25:58.222126 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-03 15:25:58.222492 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-03 15:25:58.222844 | orchestrator | 2025-06-03 15:25:58.223208 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-06-03 15:25:58.223583 | orchestrator | Tuesday 03 June 2025 15:25:58 +0000 (0:00:01.239) 0:00:03.508 ********** 2025-06-03 15:25:59.581846 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-06-03 15:25:59.581955 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-06-03 15:25:59.582593 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-06-03 15:25:59.582958 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-06-03 15:25:59.583643 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-06-03 15:25:59.585536 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-06-03 15:25:59.585736 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-03 15:25:59.585760 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-03 15:25:59.586535 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-03 15:25:59.588562 | orchestrator | 2025-06-03 15:25:59.588670 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-03 15:25:59.589294 | orchestrator | Tuesday 03 June 2025 15:25:59 +0000 (0:00:01.361) 0:00:04.870 ********** 2025-06-03 15:26:01.746690 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-03 15:26:01.746800 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-03 15:26:01.747003 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-03 15:26:01.748727 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-03 15:26:01.748955 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-03 15:26:01.749680 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-03 15:26:01.751894 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-03 15:26:01.752916 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-03 15:26:01.753073 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-03 15:26:01.754610 | orchestrator | 2025-06-03 15:26:01.754654 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-03 
15:26:01.755600 | orchestrator | Tuesday 03 June 2025 15:26:01 +0000 (0:00:02.168) 0:00:07.039 ********** 2025-06-03 15:26:02.341801 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:26:02.342269 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:26:02.342606 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:26:02.342845 | orchestrator | 2025-06-03 15:26:02.343276 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-06-03 15:26:02.344504 | orchestrator | Tuesday 03 June 2025 15:26:02 +0000 (0:00:00.594) 0:00:07.633 ********** 2025-06-03 15:26:03.017116 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:26:03.017246 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:26:03.017523 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:26:03.018003 | orchestrator | 2025-06-03 15:26:03.021890 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:26:03.021940 | orchestrator | 2025-06-03 15:26:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:26:03.021959 | orchestrator | 2025-06-03 15:26:03 | INFO  | Please wait and do not abort execution. 2025-06-03 15:26:03.022245 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:03.022740 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:03.023085 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:03.023505 | orchestrator | 2025-06-03 15:26:03.024000 | orchestrator | 2025-06-03 15:26:03.024509 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:26:03.024834 | orchestrator | Tuesday 03 June 2025 15:26:03 +0000 (0:00:00.672) 0:00:08.306 ********** 2025-06-03 15:26:03.025192 | orchestrator | =============================================================================== 2025-06-03 15:26:03.025320 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.17s 2025-06-03 15:26:03.025664 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s 2025-06-03 15:26:03.025927 | orchestrator | Check device availability ----------------------------------------------- 1.24s 2025-06-03 15:26:03.026353 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.86s 2025-06-03 15:26:03.026584 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s 2025-06-03 15:26:03.026791 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.63s 2025-06-03 15:26:03.026989 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-06-03 15:26:03.027387 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.35s 2025-06-03 15:26:03.029442 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s 2025-06-03 15:26:04.695250 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:26:04.695356 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:26:04.695437 | orchestrator | Registering Redlock._release_script 2025-06-03 15:26:04.741564 | orchestrator | 2025-06-03 15:26:04 | INFO  | Task 
682148ae-eb13-457f-a6ee-ee0e5341eb7a (facts) was prepared for execution. 2025-06-03 15:26:04.741661 | orchestrator | 2025-06-03 15:26:04 | INFO  | It takes a moment until task 682148ae-eb13-457f-a6ee-ee0e5341eb7a (facts) has been started and output is visible here. 2025-06-03 15:26:08.633894 | orchestrator | 2025-06-03 15:26:08.634007 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-03 15:26:08.634080 | orchestrator | 2025-06-03 15:26:08.636596 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-03 15:26:08.637656 | orchestrator | Tuesday 03 June 2025 15:26:08 +0000 (0:00:00.239) 0:00:00.239 ********** 2025-06-03 15:26:09.270221 | orchestrator | ok: [testbed-manager] 2025-06-03 15:26:09.735727 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:26:09.739282 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:26:09.740435 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:26:09.741386 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:26:09.742631 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:26:09.744177 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:26:09.745866 | orchestrator | 2025-06-03 15:26:09.747345 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-03 15:26:09.747689 | orchestrator | Tuesday 03 June 2025 15:26:09 +0000 (0:00:01.101) 0:00:01.340 ********** 2025-06-03 15:26:09.896446 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:26:09.972018 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:26:10.065565 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:26:10.211755 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:26:10.342570 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:11.184080 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:11.184170 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:11.184181 | orchestrator | 2025-06-03 15:26:11.184191 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-03 15:26:11.184200 | orchestrator | 2025-06-03 15:26:11.186214 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-03 15:26:11.186686 | orchestrator | Tuesday 03 June 2025 15:26:11 +0000 (0:00:01.448) 0:00:02.789 ********** 2025-06-03 15:26:13.473145 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:26:17.627806 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:26:17.628867 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:26:17.633816 | orchestrator | ok: [testbed-manager] 2025-06-03 15:26:17.634949 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:26:17.635720 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:26:17.637576 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:26:17.638698 | orchestrator | 2025-06-03 15:26:17.639610 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-03 15:26:17.640347 | orchestrator | 2025-06-03 15:26:17.644129 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-03 15:26:17.644760 | orchestrator | Tuesday 03 June 2025 15:26:17 +0000 (0:00:06.444) 0:00:09.234 ********** 2025-06-03 15:26:17.783158 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:26:17.859093 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:26:17.930664 | orchestrator | skipping: [testbed-node-1] 2025-06-03 
15:26:18.008140 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:26:18.085000 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:18.133079 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:18.133167 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:18.133938 | orchestrator | 2025-06-03 15:26:18.135081 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:26:18.136074 | orchestrator | 2025-06-03 15:26:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:26:18.136604 | orchestrator | 2025-06-03 15:26:18 | INFO  | Please wait and do not abort execution. 2025-06-03 15:26:18.137919 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:18.138755 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:18.139666 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:18.140489 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:18.141710 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:18.141736 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:18.142624 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:18.143127 | orchestrator | 2025-06-03 15:26:18.144527 | orchestrator | 2025-06-03 15:26:18.145302 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:26:18.146079 | orchestrator | Tuesday 03 June 2025 15:26:18 +0000 (0:00:00.504) 0:00:09.739 ********** 2025-06-03 15:26:18.146570 | orchestrator | =============================================================================== 2025-06-03 15:26:18.147292 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.44s 2025-06-03 15:26:18.148003 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.45s 2025-06-03 15:26:18.148504 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s 2025-06-03 15:26:18.149503 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-06-03 15:26:20.696795 | orchestrator | 2025-06-03 15:26:20 | INFO  | Task 4fc6d008-dc39-40c2-b990-7d0f62a72aee (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-03 15:26:20.696912 | orchestrator | 2025-06-03 15:26:20 | INFO  | It takes a moment until task 4fc6d008-dc39-40c2-b990-7d0f62a72aee (ceph-configure-lvm-volumes) has been started and output is visible here. 
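
The wipe-partitions play shown above prepares the Ceph OSD disks on testbed-node-3/4/5: it removes existing signatures, zeroes the start of each device, and re-triggers udev so the kernel re-reads the now-empty disks. Roughly the same steps as plain commands (a sketch; the device list and the 32 MiB size come from the play output, the exact dd/udevadm flags are assumptions):

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        wipefs --all "$dev"                                    # drop filesystem/RAID/LVM signatures
        dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct  # overwrite the first 32 MiB with zeros
    done
    udevadm control --reload-rules                             # reload udev rules
    udevadm trigger --action=add                               # request device events from the kernel
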
2025-06-03 15:26:24.813895 | orchestrator | 2025-06-03 15:26:24.814728 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-03 15:26:24.815426 | orchestrator | 2025-06-03 15:26:24.815879 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-03 15:26:24.816799 | orchestrator | Tuesday 03 June 2025 15:26:24 +0000 (0:00:00.299) 0:00:00.299 ********** 2025-06-03 15:26:25.048362 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 15:26:25.048489 | orchestrator | 2025-06-03 15:26:25.048507 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-03 15:26:25.048935 | orchestrator | Tuesday 03 June 2025 15:26:25 +0000 (0:00:00.236) 0:00:00.535 ********** 2025-06-03 15:26:25.253920 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:26:25.254055 | orchestrator | 2025-06-03 15:26:25.254444 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:25.254518 | orchestrator | Tuesday 03 June 2025 15:26:25 +0000 (0:00:00.207) 0:00:00.742 ********** 2025-06-03 15:26:25.583604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-03 15:26:25.584196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-03 15:26:25.584937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-03 15:26:25.585514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-03 15:26:25.586870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-03 15:26:25.587220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-03 15:26:25.587503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-03 15:26:25.588735 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-03 15:26:25.589317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-03 15:26:25.589817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-03 15:26:25.590562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-03 15:26:25.590966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-03 15:26:25.591574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-03 15:26:25.591978 | orchestrator | 2025-06-03 15:26:25.592535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:25.593301 | orchestrator | Tuesday 03 June 2025 15:26:25 +0000 (0:00:00.328) 0:00:01.070 ********** 2025-06-03 15:26:26.009720 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:26.011773 | orchestrator | 2025-06-03 15:26:26.012594 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:26.012899 | orchestrator | Tuesday 03 June 2025 15:26:26 +0000 (0:00:00.426) 0:00:01.497 ********** 2025-06-03 15:26:26.180673 | orchestrator | skipping: [testbed-node-3] 2025-06-03 
15:26:26.183606 | orchestrator | 2025-06-03 15:26:26.183694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:26.184054 | orchestrator | Tuesday 03 June 2025 15:26:26 +0000 (0:00:00.169) 0:00:01.667 ********** 2025-06-03 15:26:26.350985 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:26.351805 | orchestrator | 2025-06-03 15:26:26.353232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:26.354138 | orchestrator | Tuesday 03 June 2025 15:26:26 +0000 (0:00:00.170) 0:00:01.837 ********** 2025-06-03 15:26:26.517912 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:26.518166 | orchestrator | 2025-06-03 15:26:26.518259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:26.518497 | orchestrator | Tuesday 03 June 2025 15:26:26 +0000 (0:00:00.167) 0:00:02.005 ********** 2025-06-03 15:26:26.704275 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:26.705253 | orchestrator | 2025-06-03 15:26:26.706256 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:26.707354 | orchestrator | Tuesday 03 June 2025 15:26:26 +0000 (0:00:00.187) 0:00:02.192 ********** 2025-06-03 15:26:26.891098 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:26.891286 | orchestrator | 2025-06-03 15:26:26.891998 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:26.892862 | orchestrator | Tuesday 03 June 2025 15:26:26 +0000 (0:00:00.183) 0:00:02.376 ********** 2025-06-03 15:26:27.056753 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:27.057647 | orchestrator | 2025-06-03 15:26:27.057928 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:27.059494 | orchestrator | Tuesday 03 June 2025 15:26:27 +0000 (0:00:00.168) 0:00:02.544 ********** 2025-06-03 15:26:27.250499 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:27.251336 | orchestrator | 2025-06-03 15:26:27.252325 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:27.253264 | orchestrator | Tuesday 03 June 2025 15:26:27 +0000 (0:00:00.194) 0:00:02.738 ********** 2025-06-03 15:26:27.679510 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509) 2025-06-03 15:26:27.680265 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509) 2025-06-03 15:26:27.681110 | orchestrator | 2025-06-03 15:26:27.682136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:27.682595 | orchestrator | Tuesday 03 June 2025 15:26:27 +0000 (0:00:00.428) 0:00:03.167 ********** 2025-06-03 15:26:28.050463 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e) 2025-06-03 15:26:28.050596 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e) 2025-06-03 15:26:28.051936 | orchestrator | 2025-06-03 15:26:28.052675 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:28.053062 | orchestrator | Tuesday 03 June 2025 15:26:28 +0000 (0:00:00.369) 0:00:03.537 ********** 2025-06-03 
15:26:28.638157 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2) 2025-06-03 15:26:28.638488 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2) 2025-06-03 15:26:28.639185 | orchestrator | 2025-06-03 15:26:28.639522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:28.640230 | orchestrator | Tuesday 03 June 2025 15:26:28 +0000 (0:00:00.589) 0:00:04.126 ********** 2025-06-03 15:26:29.204133 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5) 2025-06-03 15:26:29.205637 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5) 2025-06-03 15:26:29.207638 | orchestrator | 2025-06-03 15:26:29.208214 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:29.209712 | orchestrator | Tuesday 03 June 2025 15:26:29 +0000 (0:00:00.562) 0:00:04.689 ********** 2025-06-03 15:26:29.838494 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-03 15:26:29.839214 | orchestrator | 2025-06-03 15:26:29.840471 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:29.842134 | orchestrator | Tuesday 03 June 2025 15:26:29 +0000 (0:00:00.633) 0:00:05.323 ********** 2025-06-03 15:26:30.192265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-03 15:26:30.192368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-03 15:26:30.193147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-03 15:26:30.194096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-03 15:26:30.194493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-03 15:26:30.195570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-03 15:26:30.195699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-03 15:26:30.196137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-03 15:26:30.197159 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-03 15:26:30.198065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-03 15:26:30.198372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-03 15:26:30.198498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-03 15:26:30.199321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-03 15:26:30.199778 | orchestrator | 2025-06-03 15:26:30.200639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:30.200920 | orchestrator | Tuesday 03 June 2025 15:26:30 +0000 (0:00:00.353) 0:00:05.677 ********** 2025-06-03 15:26:30.372067 | orchestrator | skipping: [testbed-node-3] 
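At this point the play has looped `_add-device-links.yml` over every block device (loop0–loop7, sda–sdd, sr0); only the QEMU virtual disks and the DVD drive contribute stable `/dev/disk/by-id` aliases (`scsi-0QEMU_QEMU_HARDDISK_…`, `scsi-SQEMU_QEMU_HARDDISK_…`, `ata-QEMU_DVD-ROM_QM00001`), the rest are skipped. The included task file is not part of this log; the following Python sketch is only an illustration of how such a device-to-alias map can be built (the by-id path and the grouping are assumptions, not the playbook's actual implementation):

```python
#!/usr/bin/env python3
# Illustration only: build a device -> by-id alias map similar to what the
# "Add known links ..." tasks appear to collect (not the playbook's own code).
import os
from collections import defaultdict

BY_ID = "/dev/disk/by-id"  # assumed location of the stable symlinks

def collect_device_links(by_id_dir: str = BY_ID) -> dict:
    links = defaultdict(list)
    if not os.path.isdir(by_id_dir):
        return {}
    for name in sorted(os.listdir(by_id_dir)):
        target = os.path.realpath(os.path.join(by_id_dir, name))
        device = os.path.basename(target)   # e.g. "sdb" or "sda1"
        links[device].append(name)          # e.g. "scsi-0QEMU_QEMU_HARDDISK_..."
    return dict(links)

if __name__ == "__main__":
    for dev, aliases in collect_device_links().items():
        print(dev, aliases)
```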
2025-06-03 15:26:30.373214 | orchestrator | 2025-06-03 15:26:30.373247 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:30.373261 | orchestrator | Tuesday 03 June 2025 15:26:30 +0000 (0:00:00.184) 0:00:05.861 ********** 2025-06-03 15:26:30.583782 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:30.583993 | orchestrator | 2025-06-03 15:26:30.584279 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:30.587217 | orchestrator | Tuesday 03 June 2025 15:26:30 +0000 (0:00:00.208) 0:00:06.069 ********** 2025-06-03 15:26:30.786308 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:30.786839 | orchestrator | 2025-06-03 15:26:30.787236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:30.787764 | orchestrator | Tuesday 03 June 2025 15:26:30 +0000 (0:00:00.202) 0:00:06.272 ********** 2025-06-03 15:26:30.988530 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:30.988719 | orchestrator | 2025-06-03 15:26:30.989076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:30.989711 | orchestrator | Tuesday 03 June 2025 15:26:30 +0000 (0:00:00.204) 0:00:06.477 ********** 2025-06-03 15:26:31.276734 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:31.276784 | orchestrator | 2025-06-03 15:26:31.277688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:31.280160 | orchestrator | Tuesday 03 June 2025 15:26:31 +0000 (0:00:00.282) 0:00:06.759 ********** 2025-06-03 15:26:31.438605 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:31.438701 | orchestrator | 2025-06-03 15:26:31.439719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:31.441126 | orchestrator | Tuesday 03 June 2025 15:26:31 +0000 (0:00:00.167) 0:00:06.927 ********** 2025-06-03 15:26:31.636875 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:31.637030 | orchestrator | 2025-06-03 15:26:31.637160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:31.638854 | orchestrator | Tuesday 03 June 2025 15:26:31 +0000 (0:00:00.194) 0:00:07.122 ********** 2025-06-03 15:26:31.850989 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:31.851086 | orchestrator | 2025-06-03 15:26:31.851101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:31.851113 | orchestrator | Tuesday 03 June 2025 15:26:31 +0000 (0:00:00.212) 0:00:07.334 ********** 2025-06-03 15:26:32.743730 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-03 15:26:32.744236 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-03 15:26:32.745123 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-03 15:26:32.745769 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-03 15:26:32.748564 | orchestrator | 2025-06-03 15:26:32.750540 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:32.751951 | orchestrator | Tuesday 03 June 2025 15:26:32 +0000 (0:00:00.895) 0:00:08.229 ********** 2025-06-03 15:26:32.956566 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:32.957681 | orchestrator | 2025-06-03 15:26:32.957775 | orchestrator | 
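The same pattern then repeats for partitions: `_add-device-partitions.yml` runs once per device, and only `sda` (the root disk) contributes entries (`sda1`, `sda14`, `sda15`, `sda16`), while the unpartitioned data disks are skipped. A minimal sketch of that lookup, assuming the usual sysfs layout rather than the playbook's actual implementation:

```python
#!/usr/bin/env python3
# Illustration only: list the partitions of a block device the way the
# "Add known partitions ..." iterations seem to (sysfs path is an assumption).
import os

def partitions_of(device: str) -> list:
    sys_dir = f"/sys/block/{device}"
    if not os.path.isdir(sys_dir):
        return []
    # partition directories live under the parent device, e.g. /sys/block/sda/sda1
    return sorted(
        d for d in os.listdir(sys_dir)
        if d.startswith(device) and os.path.exists(os.path.join(sys_dir, d, "partition"))
    )

if __name__ == "__main__":
    print(partitions_of("sda"))  # e.g. ['sda1', 'sda14', 'sda15', 'sda16']
```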
TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:32.958644 | orchestrator | Tuesday 03 June 2025 15:26:32 +0000 (0:00:00.212) 0:00:08.442 ********** 2025-06-03 15:26:33.125292 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:33.126359 | orchestrator | 2025-06-03 15:26:33.126675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:33.127579 | orchestrator | Tuesday 03 June 2025 15:26:33 +0000 (0:00:00.167) 0:00:08.609 ********** 2025-06-03 15:26:33.301509 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:33.301745 | orchestrator | 2025-06-03 15:26:33.302010 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:33.302837 | orchestrator | Tuesday 03 June 2025 15:26:33 +0000 (0:00:00.180) 0:00:08.790 ********** 2025-06-03 15:26:33.491999 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:33.493262 | orchestrator | 2025-06-03 15:26:33.494206 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-03 15:26:33.494876 | orchestrator | Tuesday 03 June 2025 15:26:33 +0000 (0:00:00.189) 0:00:08.979 ********** 2025-06-03 15:26:33.685989 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-03 15:26:33.688097 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-06-03 15:26:33.688445 | orchestrator | 2025-06-03 15:26:33.690207 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-03 15:26:33.690248 | orchestrator | Tuesday 03 June 2025 15:26:33 +0000 (0:00:00.191) 0:00:09.171 ********** 2025-06-03 15:26:33.818433 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:33.818530 | orchestrator | 2025-06-03 15:26:33.819430 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-03 15:26:33.820377 | orchestrator | Tuesday 03 June 2025 15:26:33 +0000 (0:00:00.132) 0:00:09.303 ********** 2025-06-03 15:26:33.939826 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:33.941291 | orchestrator | 2025-06-03 15:26:33.942237 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-03 15:26:33.943349 | orchestrator | Tuesday 03 June 2025 15:26:33 +0000 (0:00:00.124) 0:00:09.428 ********** 2025-06-03 15:26:34.076309 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:34.077007 | orchestrator | 2025-06-03 15:26:34.077656 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-03 15:26:34.078167 | orchestrator | Tuesday 03 June 2025 15:26:34 +0000 (0:00:00.132) 0:00:09.560 ********** 2025-06-03 15:26:34.212705 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:26:34.212853 | orchestrator | 2025-06-03 15:26:34.212990 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-03 15:26:34.213326 | orchestrator | Tuesday 03 June 2025 15:26:34 +0000 (0:00:00.138) 0:00:09.699 ********** 2025-06-03 15:26:34.396624 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a5276575-f764-5428-894d-d125091c496f'}}) 2025-06-03 15:26:34.397191 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6a443cc3-e60d-5588-869b-39e93dfe07d6'}}) 2025-06-03 15:26:34.397219 | orchestrator | 
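Each data disk (sdb, sdc) then receives a stable `osd_lvm_uuid` (the values shown, e.g. `a5276575-f764-5428-…`, have the version-5 format of name-based UUIDs), and the block-only branch turns each one into an `lvm_volumes` entry whose LV is `osd-block-<uuid>` inside a VG named `ceph-<uuid>`, exactly as in the configuration data printed at the end of this play and written out by the "Write configuration file" handler. A sketch of that mapping, with the uuid5 inputs assumed purely for illustration:

```python
#!/usr/bin/env python3
# Illustration only: derive lvm_volumes entries from per-device OSD UUIDs,
# matching the "osd-block-<uuid>" / "ceph-<uuid>" naming visible in this log.
# The uuid5 namespace/name inputs below are assumptions, not OSISM's actual ones.
import uuid

def osd_lvm_uuid(hostname: str, device: str) -> str:
    # stable, name-based UUID so reruns produce the same VG/LV names
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

def lvm_volume_entry(osd_uuid: str) -> dict:
    return {"data": f"osd-block-{osd_uuid}", "data_vg": f"ceph-{osd_uuid}"}

if __name__ == "__main__":
    for dev in ("sdb", "sdc"):
        u = osd_lvm_uuid("testbed-node-3", dev)
        print(dev, lvm_volume_entry(u))
```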
2025-06-03 15:26:34.397230 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-03 15:26:34.398287 | orchestrator | Tuesday 03 June 2025 15:26:34 +0000 (0:00:00.184) 0:00:09.883 ********** 2025-06-03 15:26:34.553523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a5276575-f764-5428-894d-d125091c496f'}})  2025-06-03 15:26:34.554317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6a443cc3-e60d-5588-869b-39e93dfe07d6'}})  2025-06-03 15:26:34.555439 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:34.556683 | orchestrator | 2025-06-03 15:26:34.558358 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-03 15:26:34.560004 | orchestrator | Tuesday 03 June 2025 15:26:34 +0000 (0:00:00.157) 0:00:10.041 ********** 2025-06-03 15:26:34.863176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a5276575-f764-5428-894d-d125091c496f'}})  2025-06-03 15:26:34.863812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6a443cc3-e60d-5588-869b-39e93dfe07d6'}})  2025-06-03 15:26:34.866155 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:34.866193 | orchestrator | 2025-06-03 15:26:34.867058 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-03 15:26:34.868306 | orchestrator | Tuesday 03 June 2025 15:26:34 +0000 (0:00:00.309) 0:00:10.351 ********** 2025-06-03 15:26:35.004258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a5276575-f764-5428-894d-d125091c496f'}})  2025-06-03 15:26:35.004989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6a443cc3-e60d-5588-869b-39e93dfe07d6'}})  2025-06-03 15:26:35.006938 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:35.007816 | orchestrator | 2025-06-03 15:26:35.009331 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-03 15:26:35.010314 | orchestrator | Tuesday 03 June 2025 15:26:34 +0000 (0:00:00.139) 0:00:10.491 ********** 2025-06-03 15:26:35.132066 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:26:35.135057 | orchestrator | 2025-06-03 15:26:35.136256 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-03 15:26:35.136897 | orchestrator | Tuesday 03 June 2025 15:26:35 +0000 (0:00:00.128) 0:00:10.619 ********** 2025-06-03 15:26:35.264570 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:26:35.264667 | orchestrator | 2025-06-03 15:26:35.269494 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-03 15:26:35.269561 | orchestrator | Tuesday 03 June 2025 15:26:35 +0000 (0:00:00.132) 0:00:10.752 ********** 2025-06-03 15:26:35.384352 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:35.387680 | orchestrator | 2025-06-03 15:26:35.388159 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-03 15:26:35.388208 | orchestrator | Tuesday 03 June 2025 15:26:35 +0000 (0:00:00.120) 0:00:10.873 ********** 2025-06-03 15:26:35.495613 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:35.496598 | orchestrator | 2025-06-03 15:26:35.497788 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-06-03 15:26:35.499114 | orchestrator | Tuesday 03 June 2025 15:26:35 +0000 (0:00:00.108) 0:00:10.981 ********** 2025-06-03 15:26:35.634335 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:35.635551 | orchestrator | 2025-06-03 15:26:35.637952 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-03 15:26:35.638541 | orchestrator | Tuesday 03 June 2025 15:26:35 +0000 (0:00:00.138) 0:00:11.120 ********** 2025-06-03 15:26:35.760857 | orchestrator | ok: [testbed-node-3] => { 2025-06-03 15:26:35.760976 | orchestrator |  "ceph_osd_devices": { 2025-06-03 15:26:35.761077 | orchestrator |  "sdb": { 2025-06-03 15:26:35.763026 | orchestrator |  "osd_lvm_uuid": "a5276575-f764-5428-894d-d125091c496f" 2025-06-03 15:26:35.764124 | orchestrator |  }, 2025-06-03 15:26:35.765444 | orchestrator |  "sdc": { 2025-06-03 15:26:35.766352 | orchestrator |  "osd_lvm_uuid": "6a443cc3-e60d-5588-869b-39e93dfe07d6" 2025-06-03 15:26:35.768601 | orchestrator |  } 2025-06-03 15:26:35.769176 | orchestrator |  } 2025-06-03 15:26:35.770745 | orchestrator | } 2025-06-03 15:26:35.771658 | orchestrator | 2025-06-03 15:26:35.773140 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-03 15:26:35.773338 | orchestrator | Tuesday 03 June 2025 15:26:35 +0000 (0:00:00.128) 0:00:11.248 ********** 2025-06-03 15:26:35.878677 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:35.880526 | orchestrator | 2025-06-03 15:26:35.881502 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-03 15:26:35.882286 | orchestrator | Tuesday 03 June 2025 15:26:35 +0000 (0:00:00.115) 0:00:11.363 ********** 2025-06-03 15:26:35.989544 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:35.990841 | orchestrator | 2025-06-03 15:26:35.991700 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-03 15:26:35.992023 | orchestrator | Tuesday 03 June 2025 15:26:35 +0000 (0:00:00.113) 0:00:11.477 ********** 2025-06-03 15:26:36.098524 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:36.098631 | orchestrator | 2025-06-03 15:26:36.099191 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-03 15:26:36.100803 | orchestrator | Tuesday 03 June 2025 15:26:36 +0000 (0:00:00.108) 0:00:11.585 ********** 2025-06-03 15:26:36.294170 | orchestrator | changed: [testbed-node-3] => { 2025-06-03 15:26:36.294766 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-03 15:26:36.294999 | orchestrator |  "ceph_osd_devices": { 2025-06-03 15:26:36.296138 | orchestrator |  "sdb": { 2025-06-03 15:26:36.297083 | orchestrator |  "osd_lvm_uuid": "a5276575-f764-5428-894d-d125091c496f" 2025-06-03 15:26:36.298822 | orchestrator |  }, 2025-06-03 15:26:36.300347 | orchestrator |  "sdc": { 2025-06-03 15:26:36.300886 | orchestrator |  "osd_lvm_uuid": "6a443cc3-e60d-5588-869b-39e93dfe07d6" 2025-06-03 15:26:36.301591 | orchestrator |  } 2025-06-03 15:26:36.302247 | orchestrator |  }, 2025-06-03 15:26:36.302787 | orchestrator |  "lvm_volumes": [ 2025-06-03 15:26:36.303464 | orchestrator |  { 2025-06-03 15:26:36.304143 | orchestrator |  "data": "osd-block-a5276575-f764-5428-894d-d125091c496f", 2025-06-03 15:26:36.304743 | orchestrator |  "data_vg": "ceph-a5276575-f764-5428-894d-d125091c496f" 2025-06-03 15:26:36.305308 | orchestrator |  }, 2025-06-03 
15:26:36.305999 | orchestrator |  { 2025-06-03 15:26:36.306628 | orchestrator |  "data": "osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6", 2025-06-03 15:26:36.306921 | orchestrator |  "data_vg": "ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6" 2025-06-03 15:26:36.307470 | orchestrator |  } 2025-06-03 15:26:36.308067 | orchestrator |  ] 2025-06-03 15:26:36.308512 | orchestrator |  } 2025-06-03 15:26:36.309092 | orchestrator | } 2025-06-03 15:26:36.309112 | orchestrator | 2025-06-03 15:26:36.309883 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-03 15:26:36.310145 | orchestrator | Tuesday 03 June 2025 15:26:36 +0000 (0:00:00.197) 0:00:11.782 ********** 2025-06-03 15:26:38.395241 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 15:26:38.396431 | orchestrator | 2025-06-03 15:26:38.397293 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-03 15:26:38.400361 | orchestrator | 2025-06-03 15:26:38.400413 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-03 15:26:38.400684 | orchestrator | Tuesday 03 June 2025 15:26:38 +0000 (0:00:02.099) 0:00:13.882 ********** 2025-06-03 15:26:38.684872 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-03 15:26:38.689251 | orchestrator | 2025-06-03 15:26:38.690554 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-03 15:26:38.692984 | orchestrator | Tuesday 03 June 2025 15:26:38 +0000 (0:00:00.289) 0:00:14.172 ********** 2025-06-03 15:26:38.905035 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:26:38.908849 | orchestrator | 2025-06-03 15:26:38.909535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:38.910252 | orchestrator | Tuesday 03 June 2025 15:26:38 +0000 (0:00:00.220) 0:00:14.392 ********** 2025-06-03 15:26:39.377882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-03 15:26:39.380534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-03 15:26:39.380574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-03 15:26:39.380586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-03 15:26:39.380597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-03 15:26:39.380608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-03 15:26:39.380619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-03 15:26:39.381675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-03 15:26:39.381914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-03 15:26:39.382101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-03 15:26:39.382593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-03 15:26:39.382736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-03 15:26:39.383171 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-03 15:26:39.383454 | orchestrator | 2025-06-03 15:26:39.383936 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:39.383962 | orchestrator | Tuesday 03 June 2025 15:26:39 +0000 (0:00:00.471) 0:00:14.863 ********** 2025-06-03 15:26:39.578861 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:39.580123 | orchestrator | 2025-06-03 15:26:39.581031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:39.581630 | orchestrator | Tuesday 03 June 2025 15:26:39 +0000 (0:00:00.201) 0:00:15.065 ********** 2025-06-03 15:26:39.789051 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:39.791046 | orchestrator | 2025-06-03 15:26:39.793140 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:39.796135 | orchestrator | Tuesday 03 June 2025 15:26:39 +0000 (0:00:00.209) 0:00:15.275 ********** 2025-06-03 15:26:40.002344 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:40.003302 | orchestrator | 2025-06-03 15:26:40.003452 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:40.003954 | orchestrator | Tuesday 03 June 2025 15:26:39 +0000 (0:00:00.213) 0:00:15.488 ********** 2025-06-03 15:26:40.191937 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:40.192161 | orchestrator | 2025-06-03 15:26:40.192585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:40.193605 | orchestrator | Tuesday 03 June 2025 15:26:40 +0000 (0:00:00.190) 0:00:15.679 ********** 2025-06-03 15:26:40.805067 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:40.807553 | orchestrator | 2025-06-03 15:26:40.812239 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:40.812469 | orchestrator | Tuesday 03 June 2025 15:26:40 +0000 (0:00:00.611) 0:00:16.291 ********** 2025-06-03 15:26:41.005098 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:41.005301 | orchestrator | 2025-06-03 15:26:41.006438 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:41.009969 | orchestrator | Tuesday 03 June 2025 15:26:40 +0000 (0:00:00.200) 0:00:16.492 ********** 2025-06-03 15:26:41.212928 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:41.214109 | orchestrator | 2025-06-03 15:26:41.214312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:41.215146 | orchestrator | Tuesday 03 June 2025 15:26:41 +0000 (0:00:00.209) 0:00:16.701 ********** 2025-06-03 15:26:41.419791 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:41.421215 | orchestrator | 2025-06-03 15:26:41.422452 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:41.423562 | orchestrator | Tuesday 03 June 2025 15:26:41 +0000 (0:00:00.205) 0:00:16.906 ********** 2025-06-03 15:26:41.844158 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269) 2025-06-03 15:26:41.845406 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269) 2025-06-03 15:26:41.846768 | orchestrator | 2025-06-03 
15:26:41.847729 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:41.852991 | orchestrator | Tuesday 03 June 2025 15:26:41 +0000 (0:00:00.423) 0:00:17.330 ********** 2025-06-03 15:26:42.266449 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81) 2025-06-03 15:26:42.270410 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81) 2025-06-03 15:26:42.270502 | orchestrator | 2025-06-03 15:26:42.270573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:42.271112 | orchestrator | Tuesday 03 June 2025 15:26:42 +0000 (0:00:00.423) 0:00:17.753 ********** 2025-06-03 15:26:42.673158 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35) 2025-06-03 15:26:42.673666 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35) 2025-06-03 15:26:42.674463 | orchestrator | 2025-06-03 15:26:42.679645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:42.682448 | orchestrator | Tuesday 03 June 2025 15:26:42 +0000 (0:00:00.407) 0:00:18.161 ********** 2025-06-03 15:26:43.083986 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144) 2025-06-03 15:26:43.085058 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144) 2025-06-03 15:26:43.088758 | orchestrator | 2025-06-03 15:26:43.089779 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:43.090535 | orchestrator | Tuesday 03 June 2025 15:26:43 +0000 (0:00:00.409) 0:00:18.570 ********** 2025-06-03 15:26:43.432633 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-03 15:26:43.435731 | orchestrator | 2025-06-03 15:26:43.436640 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:43.437903 | orchestrator | Tuesday 03 June 2025 15:26:43 +0000 (0:00:00.349) 0:00:18.919 ********** 2025-06-03 15:26:43.847841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-03 15:26:43.849530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-03 15:26:43.851290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-03 15:26:43.852254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-03 15:26:43.853546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-03 15:26:43.854674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-03 15:26:43.855129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-03 15:26:43.855964 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-03 15:26:43.856425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-03 15:26:43.856898 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-03 15:26:43.857591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-03 15:26:43.858135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-03 15:26:43.858650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-03 15:26:43.859104 | orchestrator | 2025-06-03 15:26:43.859660 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:43.860044 | orchestrator | Tuesday 03 June 2025 15:26:43 +0000 (0:00:00.414) 0:00:19.333 ********** 2025-06-03 15:26:44.039273 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:44.039547 | orchestrator | 2025-06-03 15:26:44.040593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:44.041500 | orchestrator | Tuesday 03 June 2025 15:26:44 +0000 (0:00:00.189) 0:00:19.522 ********** 2025-06-03 15:26:44.685123 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:44.685775 | orchestrator | 2025-06-03 15:26:44.686344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:44.688764 | orchestrator | Tuesday 03 June 2025 15:26:44 +0000 (0:00:00.649) 0:00:20.171 ********** 2025-06-03 15:26:44.878698 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:44.878937 | orchestrator | 2025-06-03 15:26:44.880088 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:44.881561 | orchestrator | Tuesday 03 June 2025 15:26:44 +0000 (0:00:00.194) 0:00:20.366 ********** 2025-06-03 15:26:45.071019 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:45.071753 | orchestrator | 2025-06-03 15:26:45.075682 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:45.075772 | orchestrator | Tuesday 03 June 2025 15:26:45 +0000 (0:00:00.192) 0:00:20.558 ********** 2025-06-03 15:26:45.272420 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:45.272779 | orchestrator | 2025-06-03 15:26:45.274055 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:45.275006 | orchestrator | Tuesday 03 June 2025 15:26:45 +0000 (0:00:00.198) 0:00:20.756 ********** 2025-06-03 15:26:45.512133 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:45.512286 | orchestrator | 2025-06-03 15:26:45.512853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:45.513429 | orchestrator | Tuesday 03 June 2025 15:26:45 +0000 (0:00:00.243) 0:00:21.000 ********** 2025-06-03 15:26:45.711211 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:45.712672 | orchestrator | 2025-06-03 15:26:45.713736 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:45.714988 | orchestrator | Tuesday 03 June 2025 15:26:45 +0000 (0:00:00.199) 0:00:21.199 ********** 2025-06-03 15:26:45.888199 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:45.888336 | orchestrator | 2025-06-03 15:26:45.889292 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:45.889337 | orchestrator | Tuesday 03 June 2025 
15:26:45 +0000 (0:00:00.176) 0:00:21.376 ********** 2025-06-03 15:26:46.485685 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-03 15:26:46.485768 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-03 15:26:46.486644 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-03 15:26:46.487459 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-03 15:26:46.489067 | orchestrator | 2025-06-03 15:26:46.490115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:46.491278 | orchestrator | Tuesday 03 June 2025 15:26:46 +0000 (0:00:00.594) 0:00:21.971 ********** 2025-06-03 15:26:46.679816 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:46.680364 | orchestrator | 2025-06-03 15:26:46.681401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:46.682290 | orchestrator | Tuesday 03 June 2025 15:26:46 +0000 (0:00:00.195) 0:00:22.166 ********** 2025-06-03 15:26:46.862628 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:46.862708 | orchestrator | 2025-06-03 15:26:46.863505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:46.864567 | orchestrator | Tuesday 03 June 2025 15:26:46 +0000 (0:00:00.182) 0:00:22.349 ********** 2025-06-03 15:26:47.038764 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:47.039146 | orchestrator | 2025-06-03 15:26:47.040204 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:47.041021 | orchestrator | Tuesday 03 June 2025 15:26:47 +0000 (0:00:00.175) 0:00:22.525 ********** 2025-06-03 15:26:47.250005 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:47.251054 | orchestrator | 2025-06-03 15:26:47.252680 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-03 15:26:47.253841 | orchestrator | Tuesday 03 June 2025 15:26:47 +0000 (0:00:00.211) 0:00:22.736 ********** 2025-06-03 15:26:47.546991 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-03 15:26:47.547100 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-03 15:26:47.547142 | orchestrator | 2025-06-03 15:26:47.547230 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-03 15:26:47.547246 | orchestrator | Tuesday 03 June 2025 15:26:47 +0000 (0:00:00.296) 0:00:23.033 ********** 2025-06-03 15:26:47.679442 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:47.679555 | orchestrator | 2025-06-03 15:26:47.679573 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-03 15:26:47.679887 | orchestrator | Tuesday 03 June 2025 15:26:47 +0000 (0:00:00.134) 0:00:23.167 ********** 2025-06-03 15:26:47.819707 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:47.820000 | orchestrator | 2025-06-03 15:26:47.821026 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-03 15:26:47.824234 | orchestrator | Tuesday 03 June 2025 15:26:47 +0000 (0:00:00.139) 0:00:23.307 ********** 2025-06-03 15:26:47.949940 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:47.950163 | orchestrator | 2025-06-03 15:26:47.950508 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-03 
15:26:47.950782 | orchestrator | Tuesday 03 June 2025 15:26:47 +0000 (0:00:00.131) 0:00:23.438 ********** 2025-06-03 15:26:48.075953 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:26:48.076058 | orchestrator | 2025-06-03 15:26:48.076074 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-03 15:26:48.076088 | orchestrator | Tuesday 03 June 2025 15:26:48 +0000 (0:00:00.125) 0:00:23.564 ********** 2025-06-03 15:26:48.212334 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8e839e97-cc3d-5431-ae91-f94b997cade9'}}) 2025-06-03 15:26:48.213916 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1191cd60-4b8c-5454-8e42-9818af3c2595'}}) 2025-06-03 15:26:48.214527 | orchestrator | 2025-06-03 15:26:48.214810 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-03 15:26:48.215217 | orchestrator | Tuesday 03 June 2025 15:26:48 +0000 (0:00:00.135) 0:00:23.700 ********** 2025-06-03 15:26:48.342117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8e839e97-cc3d-5431-ae91-f94b997cade9'}})  2025-06-03 15:26:48.342273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1191cd60-4b8c-5454-8e42-9818af3c2595'}})  2025-06-03 15:26:48.343147 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:48.345536 | orchestrator | 2025-06-03 15:26:48.345883 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-03 15:26:48.347156 | orchestrator | Tuesday 03 June 2025 15:26:48 +0000 (0:00:00.130) 0:00:23.830 ********** 2025-06-03 15:26:48.482386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8e839e97-cc3d-5431-ae91-f94b997cade9'}})  2025-06-03 15:26:48.484599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1191cd60-4b8c-5454-8e42-9818af3c2595'}})  2025-06-03 15:26:48.484646 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:48.484660 | orchestrator | 2025-06-03 15:26:48.484673 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-03 15:26:48.484686 | orchestrator | Tuesday 03 June 2025 15:26:48 +0000 (0:00:00.139) 0:00:23.969 ********** 2025-06-03 15:26:48.613879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8e839e97-cc3d-5431-ae91-f94b997cade9'}})  2025-06-03 15:26:48.613977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1191cd60-4b8c-5454-8e42-9818af3c2595'}})  2025-06-03 15:26:48.614147 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:48.614168 | orchestrator | 2025-06-03 15:26:48.614560 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-03 15:26:48.617986 | orchestrator | Tuesday 03 June 2025 15:26:48 +0000 (0:00:00.133) 0:00:24.103 ********** 2025-06-03 15:26:48.749065 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:26:48.750717 | orchestrator | 2025-06-03 15:26:48.750804 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-03 15:26:48.754758 | orchestrator | Tuesday 03 June 2025 15:26:48 +0000 (0:00:00.134) 0:00:24.237 ********** 2025-06-03 15:26:48.888932 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:26:48.890207 
| orchestrator | 2025-06-03 15:26:48.892124 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-03 15:26:48.893264 | orchestrator | Tuesday 03 June 2025 15:26:48 +0000 (0:00:00.139) 0:00:24.377 ********** 2025-06-03 15:26:49.009558 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:49.011239 | orchestrator | 2025-06-03 15:26:49.011292 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-03 15:26:49.011794 | orchestrator | Tuesday 03 June 2025 15:26:49 +0000 (0:00:00.120) 0:00:24.497 ********** 2025-06-03 15:26:49.257423 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:49.258612 | orchestrator | 2025-06-03 15:26:49.259696 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-03 15:26:49.261096 | orchestrator | Tuesday 03 June 2025 15:26:49 +0000 (0:00:00.246) 0:00:24.743 ********** 2025-06-03 15:26:49.385226 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:49.385376 | orchestrator | 2025-06-03 15:26:49.387296 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-03 15:26:49.389114 | orchestrator | Tuesday 03 June 2025 15:26:49 +0000 (0:00:00.127) 0:00:24.871 ********** 2025-06-03 15:26:49.518262 | orchestrator | ok: [testbed-node-4] => { 2025-06-03 15:26:49.521029 | orchestrator |  "ceph_osd_devices": { 2025-06-03 15:26:49.524326 | orchestrator |  "sdb": { 2025-06-03 15:26:49.525791 | orchestrator |  "osd_lvm_uuid": "8e839e97-cc3d-5431-ae91-f94b997cade9" 2025-06-03 15:26:49.526465 | orchestrator |  }, 2025-06-03 15:26:49.527459 | orchestrator |  "sdc": { 2025-06-03 15:26:49.528861 | orchestrator |  "osd_lvm_uuid": "1191cd60-4b8c-5454-8e42-9818af3c2595" 2025-06-03 15:26:49.528961 | orchestrator |  } 2025-06-03 15:26:49.529372 | orchestrator |  } 2025-06-03 15:26:49.532093 | orchestrator | } 2025-06-03 15:26:49.532339 | orchestrator | 2025-06-03 15:26:49.532833 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-03 15:26:49.533793 | orchestrator | Tuesday 03 June 2025 15:26:49 +0000 (0:00:00.132) 0:00:25.004 ********** 2025-06-03 15:26:49.650513 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:49.651706 | orchestrator | 2025-06-03 15:26:49.654924 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-03 15:26:49.655925 | orchestrator | Tuesday 03 June 2025 15:26:49 +0000 (0:00:00.133) 0:00:25.138 ********** 2025-06-03 15:26:49.779851 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:49.781130 | orchestrator | 2025-06-03 15:26:49.782117 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-03 15:26:49.782626 | orchestrator | Tuesday 03 June 2025 15:26:49 +0000 (0:00:00.129) 0:00:25.267 ********** 2025-06-03 15:26:49.889367 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:49.893430 | orchestrator | 2025-06-03 15:26:49.893476 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-03 15:26:49.896883 | orchestrator | Tuesday 03 June 2025 15:26:49 +0000 (0:00:00.109) 0:00:25.377 ********** 2025-06-03 15:26:50.150658 | orchestrator | changed: [testbed-node-4] => { 2025-06-03 15:26:50.153513 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-03 15:26:50.153596 | orchestrator |  "ceph_osd_devices": { 2025-06-03 
15:26:50.153610 | orchestrator |  "sdb": { 2025-06-03 15:26:50.153670 | orchestrator |  "osd_lvm_uuid": "8e839e97-cc3d-5431-ae91-f94b997cade9" 2025-06-03 15:26:50.156500 | orchestrator |  }, 2025-06-03 15:26:50.156570 | orchestrator |  "sdc": { 2025-06-03 15:26:50.156584 | orchestrator |  "osd_lvm_uuid": "1191cd60-4b8c-5454-8e42-9818af3c2595" 2025-06-03 15:26:50.156596 | orchestrator |  } 2025-06-03 15:26:50.156607 | orchestrator |  }, 2025-06-03 15:26:50.156662 | orchestrator |  "lvm_volumes": [ 2025-06-03 15:26:50.157191 | orchestrator |  { 2025-06-03 15:26:50.157968 | orchestrator |  "data": "osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9", 2025-06-03 15:26:50.158355 | orchestrator |  "data_vg": "ceph-8e839e97-cc3d-5431-ae91-f94b997cade9" 2025-06-03 15:26:50.159111 | orchestrator |  }, 2025-06-03 15:26:50.159816 | orchestrator |  { 2025-06-03 15:26:50.160190 | orchestrator |  "data": "osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595", 2025-06-03 15:26:50.160999 | orchestrator |  "data_vg": "ceph-1191cd60-4b8c-5454-8e42-9818af3c2595" 2025-06-03 15:26:50.161335 | orchestrator |  } 2025-06-03 15:26:50.161885 | orchestrator |  ] 2025-06-03 15:26:50.162600 | orchestrator |  } 2025-06-03 15:26:50.165250 | orchestrator | } 2025-06-03 15:26:50.165283 | orchestrator | 2025-06-03 15:26:50.165296 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-03 15:26:50.165334 | orchestrator | Tuesday 03 June 2025 15:26:50 +0000 (0:00:00.260) 0:00:25.637 ********** 2025-06-03 15:26:51.149279 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-03 15:26:51.149374 | orchestrator | 2025-06-03 15:26:51.149527 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-03 15:26:51.149546 | orchestrator | 2025-06-03 15:26:51.150280 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-03 15:26:51.150507 | orchestrator | Tuesday 03 June 2025 15:26:51 +0000 (0:00:00.999) 0:00:26.637 ********** 2025-06-03 15:26:51.548854 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-03 15:26:51.550683 | orchestrator | 2025-06-03 15:26:51.552483 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-03 15:26:51.552633 | orchestrator | Tuesday 03 June 2025 15:26:51 +0000 (0:00:00.397) 0:00:27.035 ********** 2025-06-03 15:26:52.025660 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:26:52.026606 | orchestrator | 2025-06-03 15:26:52.030908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:52.030990 | orchestrator | Tuesday 03 June 2025 15:26:52 +0000 (0:00:00.478) 0:00:27.514 ********** 2025-06-03 15:26:52.386377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-03 15:26:52.386788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-03 15:26:52.387830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-03 15:26:52.388569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-03 15:26:52.389173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-03 15:26:52.391222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-06-03 15:26:52.393221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-03 15:26:52.394640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-03 15:26:52.395797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-03 15:26:52.396677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-03 15:26:52.397743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-03 15:26:52.398718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-03 15:26:52.399490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-03 15:26:52.400452 | orchestrator | 2025-06-03 15:26:52.401211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:52.402136 | orchestrator | Tuesday 03 June 2025 15:26:52 +0000 (0:00:00.361) 0:00:27.875 ********** 2025-06-03 15:26:52.569960 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:52.570286 | orchestrator | 2025-06-03 15:26:52.571201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:52.572478 | orchestrator | Tuesday 03 June 2025 15:26:52 +0000 (0:00:00.182) 0:00:28.058 ********** 2025-06-03 15:26:52.743898 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:52.744747 | orchestrator | 2025-06-03 15:26:52.745816 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:52.746723 | orchestrator | Tuesday 03 June 2025 15:26:52 +0000 (0:00:00.174) 0:00:28.232 ********** 2025-06-03 15:26:52.923552 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:52.924030 | orchestrator | 2025-06-03 15:26:52.924987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:52.925790 | orchestrator | Tuesday 03 June 2025 15:26:52 +0000 (0:00:00.179) 0:00:28.412 ********** 2025-06-03 15:26:53.122790 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:53.123899 | orchestrator | 2025-06-03 15:26:53.125592 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:53.126542 | orchestrator | Tuesday 03 June 2025 15:26:53 +0000 (0:00:00.198) 0:00:28.610 ********** 2025-06-03 15:26:53.302635 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:53.303463 | orchestrator | 2025-06-03 15:26:53.303773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:53.304927 | orchestrator | Tuesday 03 June 2025 15:26:53 +0000 (0:00:00.178) 0:00:28.789 ********** 2025-06-03 15:26:53.496678 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:53.498221 | orchestrator | 2025-06-03 15:26:53.498326 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:53.499146 | orchestrator | Tuesday 03 June 2025 15:26:53 +0000 (0:00:00.194) 0:00:28.984 ********** 2025-06-03 15:26:53.682144 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:53.682389 | orchestrator | 2025-06-03 15:26:53.683351 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-06-03 15:26:53.683859 | orchestrator | Tuesday 03 June 2025 15:26:53 +0000 (0:00:00.185) 0:00:29.170 ********** 2025-06-03 15:26:53.863294 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:53.863598 | orchestrator | 2025-06-03 15:26:53.864176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:53.864383 | orchestrator | Tuesday 03 June 2025 15:26:53 +0000 (0:00:00.181) 0:00:29.351 ********** 2025-06-03 15:26:54.363666 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df) 2025-06-03 15:26:54.367086 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df) 2025-06-03 15:26:54.367856 | orchestrator | 2025-06-03 15:26:54.368368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:54.368968 | orchestrator | Tuesday 03 June 2025 15:26:54 +0000 (0:00:00.498) 0:00:29.850 ********** 2025-06-03 15:26:55.021086 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9) 2025-06-03 15:26:55.021219 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9) 2025-06-03 15:26:55.021245 | orchestrator | 2025-06-03 15:26:55.021778 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:55.021831 | orchestrator | Tuesday 03 June 2025 15:26:55 +0000 (0:00:00.659) 0:00:30.510 ********** 2025-06-03 15:26:55.450681 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057) 2025-06-03 15:26:55.452147 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057) 2025-06-03 15:26:55.452569 | orchestrator | 2025-06-03 15:26:55.453522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:55.454231 | orchestrator | Tuesday 03 June 2025 15:26:55 +0000 (0:00:00.427) 0:00:30.938 ********** 2025-06-03 15:26:55.828353 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447) 2025-06-03 15:26:55.829228 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447) 2025-06-03 15:26:55.831270 | orchestrator | 2025-06-03 15:26:55.832587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:55.833542 | orchestrator | Tuesday 03 June 2025 15:26:55 +0000 (0:00:00.378) 0:00:31.316 ********** 2025-06-03 15:26:56.130712 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-03 15:26:56.132076 | orchestrator | 2025-06-03 15:26:56.132948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:56.133731 | orchestrator | Tuesday 03 June 2025 15:26:56 +0000 (0:00:00.302) 0:00:31.619 ********** 2025-06-03 15:26:56.477231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-03 15:26:56.479206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-03 15:26:56.480535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-03 15:26:56.482087 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-03 15:26:56.482780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-03 15:26:56.483286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-03 15:26:56.484186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-03 15:26:56.484921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-03 15:26:56.485820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-03 15:26:56.486135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-03 15:26:56.486874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-03 15:26:56.487157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-03 15:26:56.487997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-03 15:26:56.488337 | orchestrator | 2025-06-03 15:26:56.488672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:56.489275 | orchestrator | Tuesday 03 June 2025 15:26:56 +0000 (0:00:00.344) 0:00:31.964 ********** 2025-06-03 15:26:56.658880 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:56.659079 | orchestrator | 2025-06-03 15:26:56.660605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:56.661164 | orchestrator | Tuesday 03 June 2025 15:26:56 +0000 (0:00:00.182) 0:00:32.146 ********** 2025-06-03 15:26:56.842456 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:56.843711 | orchestrator | 2025-06-03 15:26:56.845196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:56.846171 | orchestrator | Tuesday 03 June 2025 15:26:56 +0000 (0:00:00.183) 0:00:32.330 ********** 2025-06-03 15:26:57.031381 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:57.031656 | orchestrator | 2025-06-03 15:26:57.032503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:57.032879 | orchestrator | Tuesday 03 June 2025 15:26:57 +0000 (0:00:00.189) 0:00:32.519 ********** 2025-06-03 15:26:57.215730 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:57.215826 | orchestrator | 2025-06-03 15:26:57.216510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:57.216537 | orchestrator | Tuesday 03 June 2025 15:26:57 +0000 (0:00:00.184) 0:00:32.704 ********** 2025-06-03 15:26:57.408947 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:57.409159 | orchestrator | 2025-06-03 15:26:57.410543 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:57.411281 | orchestrator | Tuesday 03 June 2025 15:26:57 +0000 (0:00:00.190) 0:00:32.895 ********** 2025-06-03 15:26:57.999158 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:58.001164 | orchestrator | 2025-06-03 15:26:58.002091 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-06-03 15:26:58.003543 | orchestrator | Tuesday 03 June 2025 15:26:57 +0000 (0:00:00.592) 0:00:33.487 ********** 2025-06-03 15:26:58.220463 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:58.221885 | orchestrator | 2025-06-03 15:26:58.224308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:58.226884 | orchestrator | Tuesday 03 June 2025 15:26:58 +0000 (0:00:00.219) 0:00:33.706 ********** 2025-06-03 15:26:58.427701 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:58.428198 | orchestrator | 2025-06-03 15:26:58.428892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:58.429454 | orchestrator | Tuesday 03 June 2025 15:26:58 +0000 (0:00:00.204) 0:00:33.911 ********** 2025-06-03 15:26:59.033186 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-03 15:26:59.035118 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-03 15:26:59.035534 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-03 15:26:59.036344 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-03 15:26:59.037506 | orchestrator | 2025-06-03 15:26:59.038102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:59.039007 | orchestrator | Tuesday 03 June 2025 15:26:59 +0000 (0:00:00.609) 0:00:34.521 ********** 2025-06-03 15:26:59.202163 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:59.202655 | orchestrator | 2025-06-03 15:26:59.203791 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:59.204355 | orchestrator | Tuesday 03 June 2025 15:26:59 +0000 (0:00:00.168) 0:00:34.689 ********** 2025-06-03 15:26:59.396212 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:59.396434 | orchestrator | 2025-06-03 15:26:59.396962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:59.397185 | orchestrator | Tuesday 03 June 2025 15:26:59 +0000 (0:00:00.194) 0:00:34.884 ********** 2025-06-03 15:26:59.576663 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:59.577645 | orchestrator | 2025-06-03 15:26:59.579712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:26:59.580549 | orchestrator | Tuesday 03 June 2025 15:26:59 +0000 (0:00:00.180) 0:00:35.064 ********** 2025-06-03 15:26:59.752570 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:59.752940 | orchestrator | 2025-06-03 15:26:59.754997 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-03 15:26:59.755728 | orchestrator | Tuesday 03 June 2025 15:26:59 +0000 (0:00:00.175) 0:00:35.240 ********** 2025-06-03 15:26:59.909323 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-06-03 15:26:59.909555 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-06-03 15:26:59.911486 | orchestrator | 2025-06-03 15:26:59.912448 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-03 15:26:59.913362 | orchestrator | Tuesday 03 June 2025 15:26:59 +0000 (0:00:00.156) 0:00:35.397 ********** 2025-06-03 15:27:00.038956 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:00.039054 | orchestrator | 2025-06-03 15:27:00.039215 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-06-03 15:27:00.040377 | orchestrator | Tuesday 03 June 2025 15:27:00 +0000 (0:00:00.130) 0:00:35.527 ********** 2025-06-03 15:27:00.184990 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:00.185666 | orchestrator | 2025-06-03 15:27:00.186145 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-03 15:27:00.187469 | orchestrator | Tuesday 03 June 2025 15:27:00 +0000 (0:00:00.146) 0:00:35.673 ********** 2025-06-03 15:27:00.299935 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:00.300171 | orchestrator | 2025-06-03 15:27:00.300925 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-03 15:27:00.301419 | orchestrator | Tuesday 03 June 2025 15:27:00 +0000 (0:00:00.114) 0:00:35.788 ********** 2025-06-03 15:27:00.558843 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:27:00.559510 | orchestrator | 2025-06-03 15:27:00.560446 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-03 15:27:00.561306 | orchestrator | Tuesday 03 June 2025 15:27:00 +0000 (0:00:00.257) 0:00:36.045 ********** 2025-06-03 15:27:00.710628 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53b632c4-9781-517b-ad8e-3b37c9789a01'}}) 2025-06-03 15:27:00.712085 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'}}) 2025-06-03 15:27:00.712695 | orchestrator | 2025-06-03 15:27:00.713454 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-03 15:27:00.714313 | orchestrator | Tuesday 03 June 2025 15:27:00 +0000 (0:00:00.152) 0:00:36.198 ********** 2025-06-03 15:27:00.848660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53b632c4-9781-517b-ad8e-3b37c9789a01'}})  2025-06-03 15:27:00.850786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'}})  2025-06-03 15:27:00.850814 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:00.851250 | orchestrator | 2025-06-03 15:27:00.851953 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-03 15:27:00.852722 | orchestrator | Tuesday 03 June 2025 15:27:00 +0000 (0:00:00.138) 0:00:36.337 ********** 2025-06-03 15:27:00.988815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53b632c4-9781-517b-ad8e-3b37c9789a01'}})  2025-06-03 15:27:00.988883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'}})  2025-06-03 15:27:00.989898 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:00.990654 | orchestrator | 2025-06-03 15:27:00.991353 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-03 15:27:00.991982 | orchestrator | Tuesday 03 June 2025 15:27:00 +0000 (0:00:00.139) 0:00:36.476 ********** 2025-06-03 15:27:01.128342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53b632c4-9781-517b-ad8e-3b37c9789a01'}})  2025-06-03 15:27:01.128914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'}})  2025-06-03 
15:27:01.129153 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:01.129779 | orchestrator | 2025-06-03 15:27:01.130106 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-03 15:27:01.130551 | orchestrator | Tuesday 03 June 2025 15:27:01 +0000 (0:00:00.140) 0:00:36.617 ********** 2025-06-03 15:27:01.259053 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:27:01.260017 | orchestrator | 2025-06-03 15:27:01.261473 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-03 15:27:01.261988 | orchestrator | Tuesday 03 June 2025 15:27:01 +0000 (0:00:00.129) 0:00:36.747 ********** 2025-06-03 15:27:01.381344 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:27:01.381518 | orchestrator | 2025-06-03 15:27:01.382775 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-03 15:27:01.383640 | orchestrator | Tuesday 03 June 2025 15:27:01 +0000 (0:00:00.120) 0:00:36.868 ********** 2025-06-03 15:27:01.510238 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:01.510874 | orchestrator | 2025-06-03 15:27:01.511607 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-03 15:27:01.512247 | orchestrator | Tuesday 03 June 2025 15:27:01 +0000 (0:00:00.130) 0:00:36.999 ********** 2025-06-03 15:27:01.632670 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:01.633845 | orchestrator | 2025-06-03 15:27:01.634900 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-03 15:27:01.635666 | orchestrator | Tuesday 03 June 2025 15:27:01 +0000 (0:00:00.121) 0:00:37.120 ********** 2025-06-03 15:27:01.784619 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:01.785625 | orchestrator | 2025-06-03 15:27:01.786737 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-03 15:27:01.787659 | orchestrator | Tuesday 03 June 2025 15:27:01 +0000 (0:00:00.151) 0:00:37.272 ********** 2025-06-03 15:27:01.959361 | orchestrator | ok: [testbed-node-5] => { 2025-06-03 15:27:01.960822 | orchestrator |  "ceph_osd_devices": { 2025-06-03 15:27:01.961881 | orchestrator |  "sdb": { 2025-06-03 15:27:01.963235 | orchestrator |  "osd_lvm_uuid": "53b632c4-9781-517b-ad8e-3b37c9789a01" 2025-06-03 15:27:01.964075 | orchestrator |  }, 2025-06-03 15:27:01.964688 | orchestrator |  "sdc": { 2025-06-03 15:27:01.965596 | orchestrator |  "osd_lvm_uuid": "ba1ebe02-3aa8-524d-8f69-e3cc70944ba5" 2025-06-03 15:27:01.966075 | orchestrator |  } 2025-06-03 15:27:01.966840 | orchestrator |  } 2025-06-03 15:27:01.967600 | orchestrator | } 2025-06-03 15:27:01.968354 | orchestrator | 2025-06-03 15:27:01.968691 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-03 15:27:01.969155 | orchestrator | Tuesday 03 June 2025 15:27:01 +0000 (0:00:00.174) 0:00:37.446 ********** 2025-06-03 15:27:02.104954 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:02.105114 | orchestrator | 2025-06-03 15:27:02.106860 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-03 15:27:02.108537 | orchestrator | Tuesday 03 June 2025 15:27:02 +0000 (0:00:00.146) 0:00:37.593 ********** 2025-06-03 15:27:02.369518 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:02.369594 | orchestrator | 2025-06-03 15:27:02.371299 | orchestrator | 
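The two lvm_volumes entries that show up in the configuration data below are derived mechanically from the osd_lvm_uuid of each ceph_osd_devices entry printed above: the logical volume is named osd-block-<uuid> and its volume group ceph-<uuid>. A minimal Ansible sketch of that mapping (not the OSISM role verbatim; the variable names are taken from the log output above):

- name: Generate lvm_volumes structure (block only)
  ansible.builtin.set_fact:
    lvm_volumes: "{{ lvm_volumes | default([]) + [{'data': 'osd-block-' ~ item.value.osd_lvm_uuid, 'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid}] }}"
  loop: "{{ ceph_osd_devices | dict2items }}"

With ceph_osd_devices as shown for testbed-node-5 (sdb and sdc with their UUIDs), this yields exactly the two data/data_vg pairs listed under lvm_volumes in the configuration data that follows.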
TASK [Print shared DB/WAL devices] ********************************************* 2025-06-03 15:27:02.372621 | orchestrator | Tuesday 03 June 2025 15:27:02 +0000 (0:00:00.263) 0:00:37.856 ********** 2025-06-03 15:27:02.488782 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:02.489734 | orchestrator | 2025-06-03 15:27:02.492025 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-03 15:27:02.492585 | orchestrator | Tuesday 03 June 2025 15:27:02 +0000 (0:00:00.120) 0:00:37.977 ********** 2025-06-03 15:27:02.699694 | orchestrator | changed: [testbed-node-5] => { 2025-06-03 15:27:02.699802 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-03 15:27:02.699867 | orchestrator |  "ceph_osd_devices": { 2025-06-03 15:27:02.702629 | orchestrator |  "sdb": { 2025-06-03 15:27:02.702660 | orchestrator |  "osd_lvm_uuid": "53b632c4-9781-517b-ad8e-3b37c9789a01" 2025-06-03 15:27:02.702977 | orchestrator |  }, 2025-06-03 15:27:02.703839 | orchestrator |  "sdc": { 2025-06-03 15:27:02.704608 | orchestrator |  "osd_lvm_uuid": "ba1ebe02-3aa8-524d-8f69-e3cc70944ba5" 2025-06-03 15:27:02.705394 | orchestrator |  } 2025-06-03 15:27:02.706911 | orchestrator |  }, 2025-06-03 15:27:02.707034 | orchestrator |  "lvm_volumes": [ 2025-06-03 15:27:02.707728 | orchestrator |  { 2025-06-03 15:27:02.708367 | orchestrator |  "data": "osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01", 2025-06-03 15:27:02.709953 | orchestrator |  "data_vg": "ceph-53b632c4-9781-517b-ad8e-3b37c9789a01" 2025-06-03 15:27:02.710439 | orchestrator |  }, 2025-06-03 15:27:02.711153 | orchestrator |  { 2025-06-03 15:27:02.712246 | orchestrator |  "data": "osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5", 2025-06-03 15:27:02.713052 | orchestrator |  "data_vg": "ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5" 2025-06-03 15:27:02.713143 | orchestrator |  } 2025-06-03 15:27:02.714360 | orchestrator |  ] 2025-06-03 15:27:02.714776 | orchestrator |  } 2025-06-03 15:27:02.717398 | orchestrator | } 2025-06-03 15:27:02.717469 | orchestrator | 2025-06-03 15:27:02.717488 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-03 15:27:02.717507 | orchestrator | Tuesday 03 June 2025 15:27:02 +0000 (0:00:00.207) 0:00:38.185 ********** 2025-06-03 15:27:03.618097 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-03 15:27:03.618529 | orchestrator | 2025-06-03 15:27:03.619079 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:27:03.619255 | orchestrator | 2025-06-03 15:27:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:27:03.619540 | orchestrator | 2025-06-03 15:27:03 | INFO  | Please wait and do not abort execution. 
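The "Write configuration file" handler persists the _ceph_configure_lvm_config_data shown above into the configuration repository on the manager node. Roughly, the written file carries the two keys exactly as printed; the target path is an assumption here, only the keys and values come from the log:

# e.g. a host_vars file for testbed-node-5 in the configuration repository (path assumed)
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 53b632c4-9781-517b-ad8e-3b37c9789a01
  sdc:
    osd_lvm_uuid: ba1ebe02-3aa8-524d-8f69-e3cc70944ba5
lvm_volumes:
  - data: osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01
    data_vg: ceph-53b632c4-9781-517b-ad8e-3b37c9789a01
  - data: osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5
    data_vg: ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5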
2025-06-03 15:27:03.620586 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-03 15:27:03.621532 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-03 15:27:03.621870 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-03 15:27:03.622525 | orchestrator | 2025-06-03 15:27:03.623166 | orchestrator | 2025-06-03 15:27:03.623704 | orchestrator | 2025-06-03 15:27:03.624502 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:27:03.624934 | orchestrator | Tuesday 03 June 2025 15:27:03 +0000 (0:00:00.920) 0:00:39.105 ********** 2025-06-03 15:27:03.625472 | orchestrator | =============================================================================== 2025-06-03 15:27:03.626087 | orchestrator | Write configuration file ------------------------------------------------ 4.02s 2025-06-03 15:27:03.626561 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s 2025-06-03 15:27:03.626901 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s 2025-06-03 15:27:03.627352 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.92s 2025-06-03 15:27:03.627804 | orchestrator | Get initial list of available block devices ----------------------------- 0.91s 2025-06-03 15:27:03.628210 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s 2025-06-03 15:27:03.628684 | orchestrator | Print configuration data ------------------------------------------------ 0.67s 2025-06-03 15:27:03.629078 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-06-03 15:27:03.629484 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-06-03 15:27:03.630270 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.64s 2025-06-03 15:27:03.630465 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-06-03 15:27:03.632648 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-06-03 15:27:03.632988 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2025-06-03 15:27:03.633777 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2025-06-03 15:27:03.634675 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2025-06-03 15:27:03.634767 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2025-06-03 15:27:03.635536 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.59s 2025-06-03 15:27:03.635572 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s 2025-06-03 15:27:03.635777 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.52s 2025-06-03 15:27:03.636147 | orchestrator | Print DB devices -------------------------------------------------------- 0.51s 2025-06-03 15:27:15.965256 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:27:15.965329 | orchestrator | Registering Redlock._extend_script 2025-06-03 
15:27:15.965338 | orchestrator | Registering Redlock._release_script 2025-06-03 15:27:16.023697 | orchestrator | 2025-06-03 15:27:16 | INFO  | Task bc82117f-92b6-4ae6-9abe-69f3516036e7 (sync inventory) is running in background. Output coming soon. 2025-06-03 15:27:35.646976 | orchestrator | 2025-06-03 15:27:17 | INFO  | Starting group_vars file reorganization 2025-06-03 15:27:35.647095 | orchestrator | 2025-06-03 15:27:17 | INFO  | Moved 0 file(s) to their respective directories 2025-06-03 15:27:35.647112 | orchestrator | 2025-06-03 15:27:17 | INFO  | Group_vars file reorganization completed 2025-06-03 15:27:35.647124 | orchestrator | 2025-06-03 15:27:19 | INFO  | Starting variable preparation from inventory 2025-06-03 15:27:35.647137 | orchestrator | 2025-06-03 15:27:20 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-06-03 15:27:35.647148 | orchestrator | 2025-06-03 15:27:20 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-06-03 15:27:35.647186 | orchestrator | 2025-06-03 15:27:20 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-06-03 15:27:35.647198 | orchestrator | 2025-06-03 15:27:20 | INFO  | 3 file(s) written, 6 host(s) processed 2025-06-03 15:27:35.647210 | orchestrator | 2025-06-03 15:27:20 | INFO  | Variable preparation completed: 2025-06-03 15:27:35.647222 | orchestrator | 2025-06-03 15:27:21 | INFO  | Starting inventory overwrite handling 2025-06-03 15:27:35.647233 | orchestrator | 2025-06-03 15:27:21 | INFO  | Handling group overwrites in 99-overwrite 2025-06-03 15:27:35.647244 | orchestrator | 2025-06-03 15:27:21 | INFO  | Removing group frr:children from 60-generic 2025-06-03 15:27:35.647263 | orchestrator | 2025-06-03 15:27:21 | INFO  | Removing group storage:children from 50-kolla 2025-06-03 15:27:35.647281 | orchestrator | 2025-06-03 15:27:21 | INFO  | Removing group netbird:children from 50-infrastruture 2025-06-03 15:27:35.647310 | orchestrator | 2025-06-03 15:27:21 | INFO  | Removing group ceph-mds from 50-ceph 2025-06-03 15:27:35.647329 | orchestrator | 2025-06-03 15:27:21 | INFO  | Removing group ceph-rgw from 50-ceph 2025-06-03 15:27:35.647348 | orchestrator | 2025-06-03 15:27:21 | INFO  | Handling group overwrites in 20-roles 2025-06-03 15:27:35.647367 | orchestrator | 2025-06-03 15:27:21 | INFO  | Removing group k3s_node from 50-infrastruture 2025-06-03 15:27:35.647387 | orchestrator | 2025-06-03 15:27:21 | INFO  | Removed 6 group(s) in total 2025-06-03 15:27:35.647406 | orchestrator | 2025-06-03 15:27:21 | INFO  | Inventory overwrite handling completed 2025-06-03 15:27:35.647420 | orchestrator | 2025-06-03 15:27:22 | INFO  | Starting merge of inventory files 2025-06-03 15:27:35.647470 | orchestrator | 2025-06-03 15:27:22 | INFO  | Inventory files merged successfully 2025-06-03 15:27:35.647481 | orchestrator | 2025-06-03 15:27:27 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-06-03 15:27:35.647492 | orchestrator | 2025-06-03 15:27:34 | INFO  | Successfully wrote ClusterShell configuration 2025-06-03 15:27:35.647506 | orchestrator | [master 3bd1130] 2025-06-03-15-27 2025-06-03 15:27:35.647520 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-06-03 15:27:37.623183 | orchestrator | 2025-06-03 15:27:37 | INFO  | Task d99e89ef-e2a4-45c3-9888-55c6865a0792 (ceph-create-lvm-devices) was prepared for execution. 
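The sync-inventory step above writes three generated group_vars files from the merged inventory. Only the file names and variable names (ceph_rgw_hosts, cephclient_mons, ceph_cluster_fsid) appear in the log; the shapes and values sketched below are purely illustrative placeholders:

# 050-kolla-ceph-rgw-hosts.yml (generated; host names are placeholders)
ceph_rgw_hosts:
  - testbed-node-0
  - testbed-node-1
  - testbed-node-2
# 050-infrastructure-cephclient-mons.yml (generated; addresses are placeholders)
cephclient_mons:
  - 192.168.16.10
  - 192.168.16.11
  - 192.168.16.12
# 050-ceph-cluster-fsid.yml (generated; value is a placeholder)
ceph_cluster_fsid: 00000000-0000-4000-8000-000000000000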
2025-06-03 15:27:37.623279 | orchestrator | 2025-06-03 15:27:37 | INFO  | It takes a moment until task d99e89ef-e2a4-45c3-9888-55c6865a0792 (ceph-create-lvm-devices) has been started and output is visible here. 2025-06-03 15:27:41.849517 | orchestrator | 2025-06-03 15:27:41.850488 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-03 15:27:41.852072 | orchestrator | 2025-06-03 15:27:41.853092 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-03 15:27:41.853822 | orchestrator | Tuesday 03 June 2025 15:27:41 +0000 (0:00:00.303) 0:00:00.303 ********** 2025-06-03 15:27:42.100139 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 15:27:42.100663 | orchestrator | 2025-06-03 15:27:42.101620 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-03 15:27:42.102591 | orchestrator | Tuesday 03 June 2025 15:27:42 +0000 (0:00:00.251) 0:00:00.555 ********** 2025-06-03 15:27:42.335679 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:27:42.337664 | orchestrator | 2025-06-03 15:27:42.337749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:42.338570 | orchestrator | Tuesday 03 June 2025 15:27:42 +0000 (0:00:00.234) 0:00:00.790 ********** 2025-06-03 15:27:42.758909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-03 15:27:42.759275 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-03 15:27:42.760624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-03 15:27:42.761868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-03 15:27:42.763523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-03 15:27:42.764799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-03 15:27:42.765814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-03 15:27:42.766854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-03 15:27:42.768186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-03 15:27:42.769006 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-03 15:27:42.769745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-03 15:27:42.770296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-03 15:27:42.771325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-03 15:27:42.771729 | orchestrator | 2025-06-03 15:27:42.772880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:42.773190 | orchestrator | Tuesday 03 June 2025 15:27:42 +0000 (0:00:00.423) 0:00:01.213 ********** 2025-06-03 15:27:43.235701 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:43.236395 | orchestrator | 2025-06-03 15:27:43.237650 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-06-03 15:27:43.238831 | orchestrator | Tuesday 03 June 2025 15:27:43 +0000 (0:00:00.476) 0:00:01.690 ********** 2025-06-03 15:27:43.434779 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:43.435640 | orchestrator | 2025-06-03 15:27:43.436787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:43.439217 | orchestrator | Tuesday 03 June 2025 15:27:43 +0000 (0:00:00.200) 0:00:01.890 ********** 2025-06-03 15:27:43.650268 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:43.650567 | orchestrator | 2025-06-03 15:27:43.651101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:43.651659 | orchestrator | Tuesday 03 June 2025 15:27:43 +0000 (0:00:00.215) 0:00:02.106 ********** 2025-06-03 15:27:43.853360 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:43.853611 | orchestrator | 2025-06-03 15:27:43.853965 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:43.855021 | orchestrator | Tuesday 03 June 2025 15:27:43 +0000 (0:00:00.202) 0:00:02.308 ********** 2025-06-03 15:27:44.083885 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:44.085049 | orchestrator | 2025-06-03 15:27:44.086522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:44.086865 | orchestrator | Tuesday 03 June 2025 15:27:44 +0000 (0:00:00.229) 0:00:02.537 ********** 2025-06-03 15:27:44.287523 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:44.288396 | orchestrator | 2025-06-03 15:27:44.290934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:44.290967 | orchestrator | Tuesday 03 June 2025 15:27:44 +0000 (0:00:00.204) 0:00:02.741 ********** 2025-06-03 15:27:44.499297 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:44.499722 | orchestrator | 2025-06-03 15:27:44.501187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:44.502797 | orchestrator | Tuesday 03 June 2025 15:27:44 +0000 (0:00:00.213) 0:00:02.955 ********** 2025-06-03 15:27:44.708707 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:44.709913 | orchestrator | 2025-06-03 15:27:44.710251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:44.711812 | orchestrator | Tuesday 03 June 2025 15:27:44 +0000 (0:00:00.209) 0:00:03.164 ********** 2025-06-03 15:27:45.167276 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509) 2025-06-03 15:27:45.168974 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509) 2025-06-03 15:27:45.169352 | orchestrator | 2025-06-03 15:27:45.170532 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:45.171527 | orchestrator | Tuesday 03 June 2025 15:27:45 +0000 (0:00:00.456) 0:00:03.621 ********** 2025-06-03 15:27:45.579041 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e) 2025-06-03 15:27:45.579963 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e) 2025-06-03 15:27:45.581246 | orchestrator | 2025-06-03 15:27:45.582609 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-06-03 15:27:45.583911 | orchestrator | Tuesday 03 June 2025 15:27:45 +0000 (0:00:00.412) 0:00:04.034 ********** 2025-06-03 15:27:46.248410 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2) 2025-06-03 15:27:46.251681 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2) 2025-06-03 15:27:46.252332 | orchestrator | 2025-06-03 15:27:46.253639 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:46.254572 | orchestrator | Tuesday 03 June 2025 15:27:46 +0000 (0:00:00.667) 0:00:04.702 ********** 2025-06-03 15:27:46.907364 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5) 2025-06-03 15:27:46.908386 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5) 2025-06-03 15:27:46.908521 | orchestrator | 2025-06-03 15:27:46.908992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:46.909505 | orchestrator | Tuesday 03 June 2025 15:27:46 +0000 (0:00:00.661) 0:00:05.364 ********** 2025-06-03 15:27:47.650230 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-03 15:27:47.651624 | orchestrator | 2025-06-03 15:27:47.652068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:47.653370 | orchestrator | Tuesday 03 June 2025 15:27:47 +0000 (0:00:00.740) 0:00:06.104 ********** 2025-06-03 15:27:48.078846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-03 15:27:48.079471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-03 15:27:48.080209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-03 15:27:48.080701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-03 15:27:48.082795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-03 15:27:48.083311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-03 15:27:48.083935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-03 15:27:48.084286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-03 15:27:48.084721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-03 15:27:48.086610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-03 15:27:48.087259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-03 15:27:48.087863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-03 15:27:48.088574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-03 15:27:48.088962 | orchestrator | 2025-06-03 15:27:48.089635 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 
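The repeated "Add known links" tasks above collect the stable /dev/disk/by-id aliases (the scsi-0QEMU_... and scsi-SQEMU_... items) for every raw disk, so a device can later be referenced independently of its sdX name. A rough sketch of how such links can be gathered from Ansible's device facts (block_devices and device are illustrative names, not taken from the play):

- name: Add known links to the list of available block devices
  ansible.builtin.set_fact:
    block_devices: "{{ block_devices | default([]) + [item] }}"
  loop: "{{ ansible_facts.devices[device].links.ids | default([]) }}"
  vars:
    device: sdb   # one of the raw disks enumerated above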
2025-06-03 15:27:48.090102 | orchestrator | Tuesday 03 June 2025 15:27:48 +0000 (0:00:00.429) 0:00:06.533 ********** 2025-06-03 15:27:48.282222 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:48.282323 | orchestrator | 2025-06-03 15:27:48.282472 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:48.282559 | orchestrator | Tuesday 03 June 2025 15:27:48 +0000 (0:00:00.203) 0:00:06.737 ********** 2025-06-03 15:27:48.495654 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:48.495801 | orchestrator | 2025-06-03 15:27:48.496874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:48.497681 | orchestrator | Tuesday 03 June 2025 15:27:48 +0000 (0:00:00.213) 0:00:06.950 ********** 2025-06-03 15:27:48.744927 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:48.745981 | orchestrator | 2025-06-03 15:27:48.747569 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:48.747815 | orchestrator | Tuesday 03 June 2025 15:27:48 +0000 (0:00:00.247) 0:00:07.198 ********** 2025-06-03 15:27:48.944245 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:48.945177 | orchestrator | 2025-06-03 15:27:48.945914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:48.947100 | orchestrator | Tuesday 03 June 2025 15:27:48 +0000 (0:00:00.201) 0:00:07.399 ********** 2025-06-03 15:27:49.170879 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:49.171571 | orchestrator | 2025-06-03 15:27:49.172353 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:49.173829 | orchestrator | Tuesday 03 June 2025 15:27:49 +0000 (0:00:00.226) 0:00:07.626 ********** 2025-06-03 15:27:49.373746 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:49.374680 | orchestrator | 2025-06-03 15:27:49.375872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:49.376917 | orchestrator | Tuesday 03 June 2025 15:27:49 +0000 (0:00:00.202) 0:00:07.829 ********** 2025-06-03 15:27:49.581217 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:49.584088 | orchestrator | 2025-06-03 15:27:49.584604 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:49.586641 | orchestrator | Tuesday 03 June 2025 15:27:49 +0000 (0:00:00.208) 0:00:08.037 ********** 2025-06-03 15:27:49.780612 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:49.780755 | orchestrator | 2025-06-03 15:27:49.780832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:49.781605 | orchestrator | Tuesday 03 June 2025 15:27:49 +0000 (0:00:00.197) 0:00:08.235 ********** 2025-06-03 15:27:50.890083 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-03 15:27:50.890698 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-03 15:27:50.891238 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-03 15:27:50.891817 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-03 15:27:50.892326 | orchestrator | 2025-06-03 15:27:50.892826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:50.893255 | orchestrator | Tuesday 03 June 2025 15:27:50 +0000 
(0:00:01.112) 0:00:09.347 ********** 2025-06-03 15:27:51.084854 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:51.085632 | orchestrator | 2025-06-03 15:27:51.085903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:51.086972 | orchestrator | Tuesday 03 June 2025 15:27:51 +0000 (0:00:00.191) 0:00:09.539 ********** 2025-06-03 15:27:51.295318 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:51.296111 | orchestrator | 2025-06-03 15:27:51.297527 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:51.299640 | orchestrator | Tuesday 03 June 2025 15:27:51 +0000 (0:00:00.211) 0:00:09.751 ********** 2025-06-03 15:27:51.492614 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:51.493228 | orchestrator | 2025-06-03 15:27:51.493643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:51.494343 | orchestrator | Tuesday 03 June 2025 15:27:51 +0000 (0:00:00.195) 0:00:09.946 ********** 2025-06-03 15:27:51.686311 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:51.687535 | orchestrator | 2025-06-03 15:27:51.688214 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-03 15:27:51.689279 | orchestrator | Tuesday 03 June 2025 15:27:51 +0000 (0:00:00.195) 0:00:10.142 ********** 2025-06-03 15:27:51.823496 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:51.823907 | orchestrator | 2025-06-03 15:27:51.825310 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-03 15:27:51.826680 | orchestrator | Tuesday 03 June 2025 15:27:51 +0000 (0:00:00.135) 0:00:10.278 ********** 2025-06-03 15:27:52.041399 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a5276575-f764-5428-894d-d125091c496f'}}) 2025-06-03 15:27:52.042119 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6a443cc3-e60d-5588-869b-39e93dfe07d6'}}) 2025-06-03 15:27:52.042296 | orchestrator | 2025-06-03 15:27:52.043043 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-03 15:27:52.044854 | orchestrator | Tuesday 03 June 2025 15:27:52 +0000 (0:00:00.219) 0:00:10.497 ********** 2025-06-03 15:27:53.978940 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'}) 2025-06-03 15:27:53.979081 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'}) 2025-06-03 15:27:53.980103 | orchestrator | 2025-06-03 15:27:53.981082 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-03 15:27:53.981897 | orchestrator | Tuesday 03 June 2025 15:27:53 +0000 (0:00:01.933) 0:00:12.431 ********** 2025-06-03 15:27:54.119727 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:27:54.120035 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:27:54.121026 | orchestrator | skipping: 
[testbed-node-3] 2025-06-03 15:27:54.122120 | orchestrator | 2025-06-03 15:27:54.122892 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-03 15:27:54.123496 | orchestrator | Tuesday 03 June 2025 15:27:54 +0000 (0:00:00.142) 0:00:12.573 ********** 2025-06-03 15:27:55.594994 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'}) 2025-06-03 15:27:55.595101 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'}) 2025-06-03 15:27:55.596297 | orchestrator | 2025-06-03 15:27:55.597336 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-03 15:27:55.598139 | orchestrator | Tuesday 03 June 2025 15:27:55 +0000 (0:00:01.475) 0:00:14.049 ********** 2025-06-03 15:27:55.744873 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:27:55.746266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:27:55.747301 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:55.748849 | orchestrator | 2025-06-03 15:27:55.749210 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-03 15:27:55.749895 | orchestrator | Tuesday 03 June 2025 15:27:55 +0000 (0:00:00.149) 0:00:14.199 ********** 2025-06-03 15:27:55.880078 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:55.882203 | orchestrator | 2025-06-03 15:27:55.883085 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-03 15:27:55.884673 | orchestrator | Tuesday 03 June 2025 15:27:55 +0000 (0:00:00.137) 0:00:14.336 ********** 2025-06-03 15:27:56.244880 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:27:56.245842 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:27:56.246569 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:56.247275 | orchestrator | 2025-06-03 15:27:56.248100 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-03 15:27:56.248783 | orchestrator | Tuesday 03 June 2025 15:27:56 +0000 (0:00:00.360) 0:00:14.697 ********** 2025-06-03 15:27:56.378576 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:56.378704 | orchestrator | 2025-06-03 15:27:56.379558 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-03 15:27:56.380182 | orchestrator | Tuesday 03 June 2025 15:27:56 +0000 (0:00:00.136) 0:00:14.834 ********** 2025-06-03 15:27:56.539200 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:27:56.542351 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 
'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:27:56.542942 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:56.545763 | orchestrator | 2025-06-03 15:27:56.545895 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-03 15:27:56.546670 | orchestrator | Tuesday 03 June 2025 15:27:56 +0000 (0:00:00.160) 0:00:14.994 ********** 2025-06-03 15:27:56.671021 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:56.671988 | orchestrator | 2025-06-03 15:27:56.673060 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-03 15:27:56.674547 | orchestrator | Tuesday 03 June 2025 15:27:56 +0000 (0:00:00.132) 0:00:15.126 ********** 2025-06-03 15:27:56.828804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:27:56.829557 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:27:56.830932 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:56.833228 | orchestrator | 2025-06-03 15:27:56.833738 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-03 15:27:56.834766 | orchestrator | Tuesday 03 June 2025 15:27:56 +0000 (0:00:00.158) 0:00:15.285 ********** 2025-06-03 15:27:56.990835 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:27:56.990922 | orchestrator | 2025-06-03 15:27:56.991588 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-03 15:27:56.992389 | orchestrator | Tuesday 03 June 2025 15:27:56 +0000 (0:00:00.160) 0:00:15.446 ********** 2025-06-03 15:27:57.148748 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:27:57.151154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:27:57.153046 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:57.153766 | orchestrator | 2025-06-03 15:27:57.154492 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-03 15:27:57.156506 | orchestrator | Tuesday 03 June 2025 15:27:57 +0000 (0:00:00.158) 0:00:15.604 ********** 2025-06-03 15:27:57.303945 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:27:57.304133 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:27:57.305762 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:57.305937 | orchestrator | 2025-06-03 15:27:57.306389 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-03 15:27:57.307301 | orchestrator | Tuesday 03 June 2025 15:27:57 +0000 (0:00:00.155) 0:00:15.759 ********** 2025-06-03 15:27:57.448962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 
'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:27:57.449900 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:27:57.450414 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:57.451610 | orchestrator | 2025-06-03 15:27:57.452495 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-03 15:27:57.453281 | orchestrator | Tuesday 03 June 2025 15:27:57 +0000 (0:00:00.146) 0:00:15.905 ********** 2025-06-03 15:27:57.595586 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:57.596713 | orchestrator | 2025-06-03 15:27:57.598267 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-03 15:27:57.599625 | orchestrator | Tuesday 03 June 2025 15:27:57 +0000 (0:00:00.146) 0:00:16.051 ********** 2025-06-03 15:27:57.732646 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:57.733505 | orchestrator | 2025-06-03 15:27:57.735740 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-03 15:27:57.736299 | orchestrator | Tuesday 03 June 2025 15:27:57 +0000 (0:00:00.136) 0:00:16.188 ********** 2025-06-03 15:27:57.867562 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:57.869699 | orchestrator | 2025-06-03 15:27:57.870429 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-03 15:27:57.871878 | orchestrator | Tuesday 03 June 2025 15:27:57 +0000 (0:00:00.134) 0:00:16.322 ********** 2025-06-03 15:27:58.214241 | orchestrator | ok: [testbed-node-3] => { 2025-06-03 15:27:58.215547 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-03 15:27:58.217090 | orchestrator | } 2025-06-03 15:27:58.218364 | orchestrator | 2025-06-03 15:27:58.219531 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-03 15:27:58.220100 | orchestrator | Tuesday 03 June 2025 15:27:58 +0000 (0:00:00.346) 0:00:16.669 ********** 2025-06-03 15:27:58.355116 | orchestrator | ok: [testbed-node-3] => { 2025-06-03 15:27:58.355729 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-03 15:27:58.356785 | orchestrator | } 2025-06-03 15:27:58.357641 | orchestrator | 2025-06-03 15:27:58.359183 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-03 15:27:58.359523 | orchestrator | Tuesday 03 June 2025 15:27:58 +0000 (0:00:00.142) 0:00:16.811 ********** 2025-06-03 15:27:58.485892 | orchestrator | ok: [testbed-node-3] => { 2025-06-03 15:27:58.487094 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-03 15:27:58.489897 | orchestrator | } 2025-06-03 15:27:58.490334 | orchestrator | 2025-06-03 15:27:58.490572 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-03 15:27:58.490989 | orchestrator | Tuesday 03 June 2025 15:27:58 +0000 (0:00:00.131) 0:00:16.942 ********** 2025-06-03 15:27:59.099725 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:27:59.099834 | orchestrator | 2025-06-03 15:27:59.099914 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-03 15:27:59.100681 | orchestrator | Tuesday 03 June 2025 15:27:59 +0000 (0:00:00.613) 0:00:17.556 ********** 2025-06-03 15:27:59.601250 | orchestrator | ok: [testbed-node-3] 2025-06-03 
15:27:59.601391 | orchestrator | 2025-06-03 15:27:59.601890 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-03 15:27:59.602512 | orchestrator | Tuesday 03 June 2025 15:27:59 +0000 (0:00:00.498) 0:00:18.055 ********** 2025-06-03 15:28:00.088577 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:28:00.089028 | orchestrator | 2025-06-03 15:28:00.089859 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-03 15:28:00.090608 | orchestrator | Tuesday 03 June 2025 15:28:00 +0000 (0:00:00.490) 0:00:18.545 ********** 2025-06-03 15:28:00.227724 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:28:00.228203 | orchestrator | 2025-06-03 15:28:00.229966 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-03 15:28:00.230875 | orchestrator | Tuesday 03 June 2025 15:28:00 +0000 (0:00:00.138) 0:00:18.684 ********** 2025-06-03 15:28:00.323508 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:00.323821 | orchestrator | 2025-06-03 15:28:00.324625 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-03 15:28:00.325918 | orchestrator | Tuesday 03 June 2025 15:28:00 +0000 (0:00:00.096) 0:00:18.780 ********** 2025-06-03 15:28:00.431226 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:00.433437 | orchestrator | 2025-06-03 15:28:00.434727 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-03 15:28:00.435419 | orchestrator | Tuesday 03 June 2025 15:28:00 +0000 (0:00:00.103) 0:00:18.884 ********** 2025-06-03 15:28:00.561231 | orchestrator | ok: [testbed-node-3] => { 2025-06-03 15:28:00.562501 | orchestrator |  "vgs_report": { 2025-06-03 15:28:00.562795 | orchestrator |  "vg": [] 2025-06-03 15:28:00.564312 | orchestrator |  } 2025-06-03 15:28:00.565171 | orchestrator | } 2025-06-03 15:28:00.565822 | orchestrator | 2025-06-03 15:28:00.566549 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-03 15:28:00.567212 | orchestrator | Tuesday 03 June 2025 15:28:00 +0000 (0:00:00.132) 0:00:19.017 ********** 2025-06-03 15:28:00.702780 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:00.703338 | orchestrator | 2025-06-03 15:28:00.704630 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-03 15:28:00.705329 | orchestrator | Tuesday 03 June 2025 15:28:00 +0000 (0:00:00.141) 0:00:19.159 ********** 2025-06-03 15:28:00.819855 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:00.820041 | orchestrator | 2025-06-03 15:28:00.821250 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-03 15:28:00.821511 | orchestrator | Tuesday 03 June 2025 15:28:00 +0000 (0:00:00.116) 0:00:19.275 ********** 2025-06-03 15:28:01.060733 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:01.060835 | orchestrator | 2025-06-03 15:28:01.060846 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-03 15:28:01.060856 | orchestrator | Tuesday 03 June 2025 15:28:01 +0000 (0:00:00.241) 0:00:19.517 ********** 2025-06-03 15:28:01.185287 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:01.185394 | orchestrator | 2025-06-03 15:28:01.186563 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] 
*********************** 2025-06-03 15:28:01.187795 | orchestrator | Tuesday 03 June 2025 15:28:01 +0000 (0:00:00.124) 0:00:19.642 ********** 2025-06-03 15:28:01.314301 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:01.314900 | orchestrator | 2025-06-03 15:28:01.316094 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-03 15:28:01.316886 | orchestrator | Tuesday 03 June 2025 15:28:01 +0000 (0:00:00.127) 0:00:19.769 ********** 2025-06-03 15:28:01.443176 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:01.443378 | orchestrator | 2025-06-03 15:28:01.444416 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-03 15:28:01.445124 | orchestrator | Tuesday 03 June 2025 15:28:01 +0000 (0:00:00.130) 0:00:19.899 ********** 2025-06-03 15:28:01.588351 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:01.588544 | orchestrator | 2025-06-03 15:28:01.589075 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-03 15:28:01.591634 | orchestrator | Tuesday 03 June 2025 15:28:01 +0000 (0:00:00.145) 0:00:20.044 ********** 2025-06-03 15:28:01.717657 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:01.718686 | orchestrator | 2025-06-03 15:28:01.719355 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-03 15:28:01.720442 | orchestrator | Tuesday 03 June 2025 15:28:01 +0000 (0:00:00.129) 0:00:20.174 ********** 2025-06-03 15:28:01.862953 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:01.863857 | orchestrator | 2025-06-03 15:28:01.864890 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-03 15:28:01.866124 | orchestrator | Tuesday 03 June 2025 15:28:01 +0000 (0:00:00.145) 0:00:20.319 ********** 2025-06-03 15:28:01.995163 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:01.995746 | orchestrator | 2025-06-03 15:28:01.997094 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-03 15:28:01.998622 | orchestrator | Tuesday 03 June 2025 15:28:01 +0000 (0:00:00.131) 0:00:20.451 ********** 2025-06-03 15:28:02.115641 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:02.115826 | orchestrator | 2025-06-03 15:28:02.116169 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-03 15:28:02.116677 | orchestrator | Tuesday 03 June 2025 15:28:02 +0000 (0:00:00.121) 0:00:20.572 ********** 2025-06-03 15:28:02.238563 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:02.240031 | orchestrator | 2025-06-03 15:28:02.240872 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-03 15:28:02.242672 | orchestrator | Tuesday 03 June 2025 15:28:02 +0000 (0:00:00.122) 0:00:20.695 ********** 2025-06-03 15:28:02.363082 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:02.363616 | orchestrator | 2025-06-03 15:28:02.364644 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-03 15:28:02.365721 | orchestrator | Tuesday 03 June 2025 15:28:02 +0000 (0:00:00.124) 0:00:20.819 ********** 2025-06-03 15:28:02.478635 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:02.479329 | orchestrator | 2025-06-03 15:28:02.480389 | orchestrator | TASK [Create DB LVs for ceph_db_devices] 
*************************************** 2025-06-03 15:28:02.481066 | orchestrator | Tuesday 03 June 2025 15:28:02 +0000 (0:00:00.115) 0:00:20.934 ********** 2025-06-03 15:28:02.619366 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:28:02.619598 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:28:02.620752 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:02.621364 | orchestrator | 2025-06-03 15:28:02.622119 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-03 15:28:02.622699 | orchestrator | Tuesday 03 June 2025 15:28:02 +0000 (0:00:00.140) 0:00:21.075 ********** 2025-06-03 15:28:02.877221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:28:02.878666 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:28:02.878954 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:02.879854 | orchestrator | 2025-06-03 15:28:02.880764 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-03 15:28:02.881607 | orchestrator | Tuesday 03 June 2025 15:28:02 +0000 (0:00:00.258) 0:00:21.334 ********** 2025-06-03 15:28:03.016898 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:28:03.017961 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:28:03.019186 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:03.019632 | orchestrator | 2025-06-03 15:28:03.020033 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-03 15:28:03.020544 | orchestrator | Tuesday 03 June 2025 15:28:03 +0000 (0:00:00.139) 0:00:21.473 ********** 2025-06-03 15:28:03.155597 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:28:03.155994 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:28:03.156644 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:03.157767 | orchestrator | 2025-06-03 15:28:03.158733 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-03 15:28:03.159468 | orchestrator | Tuesday 03 June 2025 15:28:03 +0000 (0:00:00.137) 0:00:21.611 ********** 2025-06-03 15:28:03.297310 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:28:03.297954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  
2025-06-03 15:28:03.298827 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:03.299494 | orchestrator | 2025-06-03 15:28:03.300489 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-03 15:28:03.301717 | orchestrator | Tuesday 03 June 2025 15:28:03 +0000 (0:00:00.141) 0:00:21.752 ********** 2025-06-03 15:28:03.456095 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:28:03.456312 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:28:03.457870 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:03.458652 | orchestrator | 2025-06-03 15:28:03.459529 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-03 15:28:03.460137 | orchestrator | Tuesday 03 June 2025 15:28:03 +0000 (0:00:00.159) 0:00:21.912 ********** 2025-06-03 15:28:03.598302 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:28:03.598403 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:28:03.598896 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:03.599265 | orchestrator | 2025-06-03 15:28:03.601423 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-03 15:28:03.601916 | orchestrator | Tuesday 03 June 2025 15:28:03 +0000 (0:00:00.141) 0:00:22.054 ********** 2025-06-03 15:28:03.739505 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:28:03.739736 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:28:03.740798 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:03.741711 | orchestrator | 2025-06-03 15:28:03.742497 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-03 15:28:03.743132 | orchestrator | Tuesday 03 June 2025 15:28:03 +0000 (0:00:00.141) 0:00:22.195 ********** 2025-06-03 15:28:04.227715 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:28:04.227899 | orchestrator | 2025-06-03 15:28:04.229148 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-03 15:28:04.230129 | orchestrator | Tuesday 03 June 2025 15:28:04 +0000 (0:00:00.486) 0:00:22.682 ********** 2025-06-03 15:28:04.746247 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:28:04.746749 | orchestrator | 2025-06-03 15:28:04.747465 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-03 15:28:04.748760 | orchestrator | Tuesday 03 June 2025 15:28:04 +0000 (0:00:00.517) 0:00:23.199 ********** 2025-06-03 15:28:04.883173 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:28:04.883979 | orchestrator | 2025-06-03 15:28:04.884648 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 
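The "Create block VGs" and "Create block LVs" tasks earlier in this play give each lvm_volumes entry one volume group on its backing disk and a single block LV spanning that group. A minimal sketch of such a step using the community.general LVM modules (the module choice and the _block_vgs_to_pvs lookup are assumptions for illustration, not a quote of the play):

- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"
    pvs: "{{ _block_vgs_to_pvs[item.data_vg] }}"   # hypothetical map, e.g. ceph-<uuid> -> /dev/sdb
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: "100%FREE"
    shrink: false
  loop: "{{ lvm_volumes }}"

The "Get list of Ceph LVs/PVs with associated VGs" tasks and the "Create list of VG/LV names" task then read the result back (lvs/pvs JSON report output, combined from _lvs_cmd_output/_pvs_cmd_output) so the play can fail early if an LV referenced in lvm_volumes is missing.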
2025-06-03 15:28:04.886907 | orchestrator | Tuesday 03 June 2025 15:28:04 +0000 (0:00:00.139) 0:00:23.339 ********** 2025-06-03 15:28:05.068828 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'vg_name': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'}) 2025-06-03 15:28:05.069920 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'vg_name': 'ceph-a5276575-f764-5428-894d-d125091c496f'}) 2025-06-03 15:28:05.070224 | orchestrator | 2025-06-03 15:28:05.071108 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-03 15:28:05.071861 | orchestrator | Tuesday 03 June 2025 15:28:05 +0000 (0:00:00.185) 0:00:23.524 ********** 2025-06-03 15:28:05.217564 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:28:05.217656 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:28:05.218252 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:05.218963 | orchestrator | 2025-06-03 15:28:05.219674 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-03 15:28:05.220215 | orchestrator | Tuesday 03 June 2025 15:28:05 +0000 (0:00:00.148) 0:00:23.673 ********** 2025-06-03 15:28:05.563656 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:28:05.564847 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:28:05.565614 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:05.566807 | orchestrator | 2025-06-03 15:28:05.567692 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-03 15:28:05.567927 | orchestrator | Tuesday 03 June 2025 15:28:05 +0000 (0:00:00.344) 0:00:24.018 ********** 2025-06-03 15:28:05.712737 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'})  2025-06-03 15:28:05.712912 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'})  2025-06-03 15:28:05.713817 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:05.715442 | orchestrator | 2025-06-03 15:28:05.717858 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-03 15:28:05.718860 | orchestrator | Tuesday 03 June 2025 15:28:05 +0000 (0:00:00.149) 0:00:24.167 ********** 2025-06-03 15:28:05.994837 | orchestrator | ok: [testbed-node-3] => { 2025-06-03 15:28:05.994943 | orchestrator |  "lvm_report": { 2025-06-03 15:28:05.996999 | orchestrator |  "lv": [ 2025-06-03 15:28:05.997033 | orchestrator |  { 2025-06-03 15:28:05.997768 | orchestrator |  "lv_name": "osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6", 2025-06-03 15:28:05.999679 | orchestrator |  "vg_name": "ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6" 2025-06-03 15:28:05.999766 | orchestrator |  }, 2025-06-03 15:28:06.000337 
| orchestrator |  { 2025-06-03 15:28:06.001226 | orchestrator |  "lv_name": "osd-block-a5276575-f764-5428-894d-d125091c496f", 2025-06-03 15:28:06.001846 | orchestrator |  "vg_name": "ceph-a5276575-f764-5428-894d-d125091c496f" 2025-06-03 15:28:06.002260 | orchestrator |  } 2025-06-03 15:28:06.002952 | orchestrator |  ], 2025-06-03 15:28:06.002996 | orchestrator |  "pv": [ 2025-06-03 15:28:06.003392 | orchestrator |  { 2025-06-03 15:28:06.004047 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-03 15:28:06.004925 | orchestrator |  "vg_name": "ceph-a5276575-f764-5428-894d-d125091c496f" 2025-06-03 15:28:06.005580 | orchestrator |  }, 2025-06-03 15:28:06.006102 | orchestrator |  { 2025-06-03 15:28:06.006601 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-03 15:28:06.007292 | orchestrator |  "vg_name": "ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6" 2025-06-03 15:28:06.007906 | orchestrator |  } 2025-06-03 15:28:06.008720 | orchestrator |  ] 2025-06-03 15:28:06.008990 | orchestrator |  } 2025-06-03 15:28:06.009555 | orchestrator | } 2025-06-03 15:28:06.009903 | orchestrator | 2025-06-03 15:28:06.010415 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-03 15:28:06.010797 | orchestrator | 2025-06-03 15:28:06.011170 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-03 15:28:06.011873 | orchestrator | Tuesday 03 June 2025 15:28:05 +0000 (0:00:00.282) 0:00:24.449 ********** 2025-06-03 15:28:06.235074 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-03 15:28:06.236411 | orchestrator | 2025-06-03 15:28:06.237821 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-03 15:28:06.238597 | orchestrator | Tuesday 03 June 2025 15:28:06 +0000 (0:00:00.240) 0:00:24.690 ********** 2025-06-03 15:28:06.471083 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:28:06.472106 | orchestrator | 2025-06-03 15:28:06.473721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:06.474582 | orchestrator | Tuesday 03 June 2025 15:28:06 +0000 (0:00:00.235) 0:00:24.925 ********** 2025-06-03 15:28:06.900667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-03 15:28:06.901684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-03 15:28:06.902580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-03 15:28:06.903603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-03 15:28:06.904709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-03 15:28:06.905302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-03 15:28:06.906348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-03 15:28:06.906860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-03 15:28:06.907408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-03 15:28:06.907770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-03 15:28:06.908240 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-03 15:28:06.909652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-03 15:28:06.909805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-03 15:28:06.911049 | orchestrator | 2025-06-03 15:28:06.911099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:06.911585 | orchestrator | Tuesday 03 June 2025 15:28:06 +0000 (0:00:00.430) 0:00:25.355 ********** 2025-06-03 15:28:07.099404 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:07.100002 | orchestrator | 2025-06-03 15:28:07.100907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:07.101847 | orchestrator | Tuesday 03 June 2025 15:28:07 +0000 (0:00:00.198) 0:00:25.554 ********** 2025-06-03 15:28:07.300616 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:07.301625 | orchestrator | 2025-06-03 15:28:07.302307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:07.303391 | orchestrator | Tuesday 03 June 2025 15:28:07 +0000 (0:00:00.201) 0:00:25.755 ********** 2025-06-03 15:28:07.488803 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:07.489218 | orchestrator | 2025-06-03 15:28:07.489609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:07.490290 | orchestrator | Tuesday 03 June 2025 15:28:07 +0000 (0:00:00.188) 0:00:25.944 ********** 2025-06-03 15:28:08.034512 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:08.035538 | orchestrator | 2025-06-03 15:28:08.036189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:08.038306 | orchestrator | Tuesday 03 June 2025 15:28:08 +0000 (0:00:00.546) 0:00:26.490 ********** 2025-06-03 15:28:08.254399 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:08.255399 | orchestrator | 2025-06-03 15:28:08.256415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:08.257261 | orchestrator | Tuesday 03 June 2025 15:28:08 +0000 (0:00:00.218) 0:00:26.709 ********** 2025-06-03 15:28:08.470658 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:08.475042 | orchestrator | 2025-06-03 15:28:08.475974 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:08.478884 | orchestrator | Tuesday 03 June 2025 15:28:08 +0000 (0:00:00.214) 0:00:26.924 ********** 2025-06-03 15:28:08.675690 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:08.676237 | orchestrator | 2025-06-03 15:28:08.676645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:08.677408 | orchestrator | Tuesday 03 June 2025 15:28:08 +0000 (0:00:00.206) 0:00:27.131 ********** 2025-06-03 15:28:08.871007 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:08.871688 | orchestrator | 2025-06-03 15:28:08.872567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:08.874499 | orchestrator | Tuesday 03 June 2025 15:28:08 +0000 (0:00:00.195) 0:00:27.326 ********** 2025-06-03 15:28:09.317224 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269) 2025-06-03 15:28:09.317349 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269) 2025-06-03 15:28:09.318214 | orchestrator | 2025-06-03 15:28:09.319416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:09.319946 | orchestrator | Tuesday 03 June 2025 15:28:09 +0000 (0:00:00.445) 0:00:27.772 ********** 2025-06-03 15:28:09.745352 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81) 2025-06-03 15:28:09.746686 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81) 2025-06-03 15:28:09.746708 | orchestrator | 2025-06-03 15:28:09.746718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:09.746725 | orchestrator | Tuesday 03 June 2025 15:28:09 +0000 (0:00:00.428) 0:00:28.200 ********** 2025-06-03 15:28:10.175178 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35) 2025-06-03 15:28:10.175374 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35) 2025-06-03 15:28:10.176007 | orchestrator | 2025-06-03 15:28:10.176916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:10.177713 | orchestrator | Tuesday 03 June 2025 15:28:10 +0000 (0:00:00.431) 0:00:28.631 ********** 2025-06-03 15:28:10.598427 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144) 2025-06-03 15:28:10.598548 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144) 2025-06-03 15:28:10.599274 | orchestrator | 2025-06-03 15:28:10.600054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:10.601666 | orchestrator | Tuesday 03 June 2025 15:28:10 +0000 (0:00:00.421) 0:00:29.053 ********** 2025-06-03 15:28:10.915619 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-03 15:28:10.916031 | orchestrator | 2025-06-03 15:28:10.919383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:10.920307 | orchestrator | Tuesday 03 June 2025 15:28:10 +0000 (0:00:00.317) 0:00:29.370 ********** 2025-06-03 15:28:11.512250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-03 15:28:11.512899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-03 15:28:11.513726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-03 15:28:11.514984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-03 15:28:11.516049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-03 15:28:11.517244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-03 15:28:11.517770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-03 15:28:11.518674 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-03 15:28:11.519344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-03 15:28:11.519854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-03 15:28:11.520535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-03 15:28:11.521135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-03 15:28:11.522274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-03 15:28:11.522934 | orchestrator | 2025-06-03 15:28:11.523292 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:11.525761 | orchestrator | Tuesday 03 June 2025 15:28:11 +0000 (0:00:00.593) 0:00:29.964 ********** 2025-06-03 15:28:11.706217 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:11.706704 | orchestrator | 2025-06-03 15:28:11.707600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:11.707748 | orchestrator | Tuesday 03 June 2025 15:28:11 +0000 (0:00:00.198) 0:00:30.162 ********** 2025-06-03 15:28:11.919152 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:11.919308 | orchestrator | 2025-06-03 15:28:11.919326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:11.919636 | orchestrator | Tuesday 03 June 2025 15:28:11 +0000 (0:00:00.212) 0:00:30.375 ********** 2025-06-03 15:28:12.118337 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:12.119640 | orchestrator | 2025-06-03 15:28:12.121482 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:12.122782 | orchestrator | Tuesday 03 June 2025 15:28:12 +0000 (0:00:00.198) 0:00:30.573 ********** 2025-06-03 15:28:12.320415 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:12.321448 | orchestrator | 2025-06-03 15:28:12.323112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:12.324635 | orchestrator | Tuesday 03 June 2025 15:28:12 +0000 (0:00:00.202) 0:00:30.775 ********** 2025-06-03 15:28:12.516923 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:12.517515 | orchestrator | 2025-06-03 15:28:12.518172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:12.518862 | orchestrator | Tuesday 03 June 2025 15:28:12 +0000 (0:00:00.196) 0:00:30.972 ********** 2025-06-03 15:28:12.719542 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:12.720861 | orchestrator | 2025-06-03 15:28:12.721896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:12.723278 | orchestrator | Tuesday 03 June 2025 15:28:12 +0000 (0:00:00.202) 0:00:31.174 ********** 2025-06-03 15:28:12.927327 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:12.927847 | orchestrator | 2025-06-03 15:28:12.928503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:12.929042 | orchestrator | Tuesday 03 June 2025 15:28:12 +0000 (0:00:00.208) 0:00:31.383 ********** 2025-06-03 15:28:13.147938 | orchestrator | 
skipping: [testbed-node-4] 2025-06-03 15:28:13.148041 | orchestrator | 2025-06-03 15:28:13.148056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:13.148070 | orchestrator | Tuesday 03 June 2025 15:28:13 +0000 (0:00:00.219) 0:00:31.603 ********** 2025-06-03 15:28:13.993335 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-03 15:28:13.994185 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-03 15:28:13.996082 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-03 15:28:13.996181 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-03 15:28:13.996817 | orchestrator | 2025-06-03 15:28:13.997255 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:13.998148 | orchestrator | Tuesday 03 June 2025 15:28:13 +0000 (0:00:00.844) 0:00:32.447 ********** 2025-06-03 15:28:14.189102 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:14.190161 | orchestrator | 2025-06-03 15:28:14.190443 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:14.191336 | orchestrator | Tuesday 03 June 2025 15:28:14 +0000 (0:00:00.197) 0:00:32.644 ********** 2025-06-03 15:28:14.388321 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:14.388424 | orchestrator | 2025-06-03 15:28:14.389276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:14.390356 | orchestrator | Tuesday 03 June 2025 15:28:14 +0000 (0:00:00.197) 0:00:32.842 ********** 2025-06-03 15:28:15.064537 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:15.065016 | orchestrator | 2025-06-03 15:28:15.066167 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:15.067061 | orchestrator | Tuesday 03 June 2025 15:28:15 +0000 (0:00:00.676) 0:00:33.519 ********** 2025-06-03 15:28:15.262251 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:15.263092 | orchestrator | 2025-06-03 15:28:15.263999 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-03 15:28:15.264822 | orchestrator | Tuesday 03 June 2025 15:28:15 +0000 (0:00:00.198) 0:00:33.718 ********** 2025-06-03 15:28:15.403016 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:15.403499 | orchestrator | 2025-06-03 15:28:15.403729 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-03 15:28:15.404060 | orchestrator | Tuesday 03 June 2025 15:28:15 +0000 (0:00:00.140) 0:00:33.859 ********** 2025-06-03 15:28:15.595721 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8e839e97-cc3d-5431-ae91-f94b997cade9'}}) 2025-06-03 15:28:15.596688 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1191cd60-4b8c-5454-8e42-9818af3c2595'}}) 2025-06-03 15:28:15.598147 | orchestrator | 2025-06-03 15:28:15.599313 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-03 15:28:15.600082 | orchestrator | Tuesday 03 June 2025 15:28:15 +0000 (0:00:00.191) 0:00:34.050 ********** 2025-06-03 15:28:17.360402 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'}) 2025-06-03 15:28:17.361243 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'}) 2025-06-03 15:28:17.361319 | orchestrator | 2025-06-03 15:28:17.362638 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-03 15:28:17.362846 | orchestrator | Tuesday 03 June 2025 15:28:17 +0000 (0:00:01.765) 0:00:35.815 ********** 2025-06-03 15:28:17.494380 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:17.495119 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:17.495978 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:17.496661 | orchestrator | 2025-06-03 15:28:17.497411 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-03 15:28:17.498201 | orchestrator | Tuesday 03 June 2025 15:28:17 +0000 (0:00:00.135) 0:00:35.951 ********** 2025-06-03 15:28:18.786150 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'}) 2025-06-03 15:28:18.786681 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'}) 2025-06-03 15:28:18.787664 | orchestrator | 2025-06-03 15:28:18.788594 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-03 15:28:18.790380 | orchestrator | Tuesday 03 June 2025 15:28:18 +0000 (0:00:01.290) 0:00:37.241 ********** 2025-06-03 15:28:18.922540 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:18.923161 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:18.923957 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:18.924950 | orchestrator | 2025-06-03 15:28:18.926696 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-03 15:28:18.926719 | orchestrator | Tuesday 03 June 2025 15:28:18 +0000 (0:00:00.138) 0:00:37.379 ********** 2025-06-03 15:28:19.054104 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:19.055121 | orchestrator | 2025-06-03 15:28:19.056353 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-03 15:28:19.057634 | orchestrator | Tuesday 03 June 2025 15:28:19 +0000 (0:00:00.131) 0:00:37.511 ********** 2025-06-03 15:28:19.199542 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:19.199979 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:19.201622 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:19.202062 | orchestrator | 2025-06-03 15:28:19.202595 | orchestrator | TASK [Create WAL VGs] 
********************************************************** 2025-06-03 15:28:19.203020 | orchestrator | Tuesday 03 June 2025 15:28:19 +0000 (0:00:00.144) 0:00:37.655 ********** 2025-06-03 15:28:19.341286 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:19.341835 | orchestrator | 2025-06-03 15:28:19.342812 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-03 15:28:19.343733 | orchestrator | Tuesday 03 June 2025 15:28:19 +0000 (0:00:00.141) 0:00:37.797 ********** 2025-06-03 15:28:19.481181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:19.482648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:19.482881 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:19.483948 | orchestrator | 2025-06-03 15:28:19.484428 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-03 15:28:19.485134 | orchestrator | Tuesday 03 June 2025 15:28:19 +0000 (0:00:00.140) 0:00:37.937 ********** 2025-06-03 15:28:19.768059 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:19.769111 | orchestrator | 2025-06-03 15:28:19.770197 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-03 15:28:19.771195 | orchestrator | Tuesday 03 June 2025 15:28:19 +0000 (0:00:00.286) 0:00:38.224 ********** 2025-06-03 15:28:19.910394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:19.910493 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:19.911827 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:19.912881 | orchestrator | 2025-06-03 15:28:19.913768 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-03 15:28:19.914911 | orchestrator | Tuesday 03 June 2025 15:28:19 +0000 (0:00:00.141) 0:00:38.365 ********** 2025-06-03 15:28:20.040958 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:28:20.041974 | orchestrator | 2025-06-03 15:28:20.042005 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-03 15:28:20.042695 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.133) 0:00:38.498 ********** 2025-06-03 15:28:20.194117 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:20.194196 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:20.194280 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:20.194367 | orchestrator | 2025-06-03 15:28:20.194744 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-03 15:28:20.196760 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.151) 0:00:38.650 ********** 2025-06-03 15:28:20.316723 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:20.317549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:20.321044 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:20.325997 | orchestrator | 2025-06-03 15:28:20.330255 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-03 15:28:20.331144 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.123) 0:00:38.773 ********** 2025-06-03 15:28:20.446683 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:20.447683 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:20.448137 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:20.449239 | orchestrator | 2025-06-03 15:28:20.449268 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-03 15:28:20.449667 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.131) 0:00:38.904 ********** 2025-06-03 15:28:20.572083 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:20.572248 | orchestrator | 2025-06-03 15:28:20.573434 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-03 15:28:20.574140 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.124) 0:00:39.028 ********** 2025-06-03 15:28:20.695551 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:20.696644 | orchestrator | 2025-06-03 15:28:20.696751 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-03 15:28:20.698728 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.123) 0:00:39.152 ********** 2025-06-03 15:28:20.824773 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:20.825888 | orchestrator | 2025-06-03 15:28:20.829697 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-03 15:28:20.830181 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.127) 0:00:39.280 ********** 2025-06-03 15:28:20.959032 | orchestrator | ok: [testbed-node-4] => { 2025-06-03 15:28:20.959541 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-03 15:28:20.959887 | orchestrator | } 2025-06-03 15:28:20.960359 | orchestrator | 2025-06-03 15:28:20.961131 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-03 15:28:20.961658 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.136) 0:00:39.416 ********** 2025-06-03 15:28:21.091598 | orchestrator | ok: [testbed-node-4] => { 2025-06-03 15:28:21.092069 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-03 15:28:21.093972 | orchestrator | } 2025-06-03 15:28:21.094384 | orchestrator | 2025-06-03 15:28:21.095621 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-03 15:28:21.096330 | orchestrator | Tuesday 03 June 2025 15:28:21 +0000 (0:00:00.131) 0:00:39.547 ********** 2025-06-03 15:28:21.213016 | orchestrator | ok: [testbed-node-4] => { 
2025-06-03 15:28:21.213659 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-03 15:28:21.214575 | orchestrator | } 2025-06-03 15:28:21.214885 | orchestrator | 2025-06-03 15:28:21.215688 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-03 15:28:21.216096 | orchestrator | Tuesday 03 June 2025 15:28:21 +0000 (0:00:00.122) 0:00:39.670 ********** 2025-06-03 15:28:21.798083 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:28:21.799658 | orchestrator | 2025-06-03 15:28:21.799678 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-03 15:28:21.800614 | orchestrator | Tuesday 03 June 2025 15:28:21 +0000 (0:00:00.584) 0:00:40.255 ********** 2025-06-03 15:28:22.304276 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:28:22.304756 | orchestrator | 2025-06-03 15:28:22.305296 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-03 15:28:22.305990 | orchestrator | Tuesday 03 June 2025 15:28:22 +0000 (0:00:00.505) 0:00:40.760 ********** 2025-06-03 15:28:22.771278 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:28:22.771759 | orchestrator | 2025-06-03 15:28:22.772660 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-03 15:28:22.773967 | orchestrator | Tuesday 03 June 2025 15:28:22 +0000 (0:00:00.466) 0:00:41.227 ********** 2025-06-03 15:28:22.920782 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:28:22.921224 | orchestrator | 2025-06-03 15:28:22.922157 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-03 15:28:22.922990 | orchestrator | Tuesday 03 June 2025 15:28:22 +0000 (0:00:00.147) 0:00:41.374 ********** 2025-06-03 15:28:23.035214 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:23.035292 | orchestrator | 2025-06-03 15:28:23.035567 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-03 15:28:23.036408 | orchestrator | Tuesday 03 June 2025 15:28:23 +0000 (0:00:00.115) 0:00:41.490 ********** 2025-06-03 15:28:23.163136 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:23.163394 | orchestrator | 2025-06-03 15:28:23.163854 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-03 15:28:23.164841 | orchestrator | Tuesday 03 June 2025 15:28:23 +0000 (0:00:00.129) 0:00:41.619 ********** 2025-06-03 15:28:23.290916 | orchestrator | ok: [testbed-node-4] => { 2025-06-03 15:28:23.291074 | orchestrator |  "vgs_report": { 2025-06-03 15:28:23.291919 | orchestrator |  "vg": [] 2025-06-03 15:28:23.293646 | orchestrator |  } 2025-06-03 15:28:23.294594 | orchestrator | } 2025-06-03 15:28:23.294626 | orchestrator | 2025-06-03 15:28:23.294712 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-03 15:28:23.294887 | orchestrator | Tuesday 03 June 2025 15:28:23 +0000 (0:00:00.127) 0:00:41.747 ********** 2025-06-03 15:28:23.436059 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:23.437055 | orchestrator | 2025-06-03 15:28:23.438009 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-03 15:28:23.439005 | orchestrator | Tuesday 03 June 2025 15:28:23 +0000 (0:00:00.143) 0:00:41.891 ********** 2025-06-03 15:28:23.581165 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:23.581823 | 
orchestrator | 2025-06-03 15:28:23.582937 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-03 15:28:23.583700 | orchestrator | Tuesday 03 June 2025 15:28:23 +0000 (0:00:00.144) 0:00:42.036 ********** 2025-06-03 15:28:23.718397 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:23.719374 | orchestrator | 2025-06-03 15:28:23.720685 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-03 15:28:23.721888 | orchestrator | Tuesday 03 June 2025 15:28:23 +0000 (0:00:00.138) 0:00:42.174 ********** 2025-06-03 15:28:23.858645 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:23.858726 | orchestrator | 2025-06-03 15:28:23.859658 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-03 15:28:23.860166 | orchestrator | Tuesday 03 June 2025 15:28:23 +0000 (0:00:00.138) 0:00:42.312 ********** 2025-06-03 15:28:23.998256 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:23.998345 | orchestrator | 2025-06-03 15:28:23.998798 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-03 15:28:23.999169 | orchestrator | Tuesday 03 June 2025 15:28:23 +0000 (0:00:00.141) 0:00:42.454 ********** 2025-06-03 15:28:24.266752 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:24.267550 | orchestrator | 2025-06-03 15:28:24.268455 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-03 15:28:24.269184 | orchestrator | Tuesday 03 June 2025 15:28:24 +0000 (0:00:00.269) 0:00:42.723 ********** 2025-06-03 15:28:24.403717 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:24.405037 | orchestrator | 2025-06-03 15:28:24.405722 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-03 15:28:24.406613 | orchestrator | Tuesday 03 June 2025 15:28:24 +0000 (0:00:00.135) 0:00:42.859 ********** 2025-06-03 15:28:24.534520 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:24.538096 | orchestrator | 2025-06-03 15:28:24.538222 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-03 15:28:24.539635 | orchestrator | Tuesday 03 June 2025 15:28:24 +0000 (0:00:00.129) 0:00:42.989 ********** 2025-06-03 15:28:24.649591 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:24.649712 | orchestrator | 2025-06-03 15:28:24.650168 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-03 15:28:24.651017 | orchestrator | Tuesday 03 June 2025 15:28:24 +0000 (0:00:00.114) 0:00:43.104 ********** 2025-06-03 15:28:24.773340 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:24.773415 | orchestrator | 2025-06-03 15:28:24.773432 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-03 15:28:24.774054 | orchestrator | Tuesday 03 June 2025 15:28:24 +0000 (0:00:00.123) 0:00:43.228 ********** 2025-06-03 15:28:24.895572 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:24.896652 | orchestrator | 2025-06-03 15:28:24.898225 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-03 15:28:24.898925 | orchestrator | Tuesday 03 June 2025 15:28:24 +0000 (0:00:00.123) 0:00:43.351 ********** 2025-06-03 15:28:25.028969 | orchestrator | skipping: [testbed-node-4] 2025-06-03 
15:28:25.029424 | orchestrator | 2025-06-03 15:28:25.029848 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-03 15:28:25.030726 | orchestrator | Tuesday 03 June 2025 15:28:25 +0000 (0:00:00.135) 0:00:43.486 ********** 2025-06-03 15:28:25.156975 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:25.157645 | orchestrator | 2025-06-03 15:28:25.158601 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-03 15:28:25.159294 | orchestrator | Tuesday 03 June 2025 15:28:25 +0000 (0:00:00.127) 0:00:43.613 ********** 2025-06-03 15:28:25.270697 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:25.270877 | orchestrator | 2025-06-03 15:28:25.272859 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-03 15:28:25.272901 | orchestrator | Tuesday 03 June 2025 15:28:25 +0000 (0:00:00.113) 0:00:43.727 ********** 2025-06-03 15:28:25.402822 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:25.403296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:25.403979 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:25.405140 | orchestrator | 2025-06-03 15:28:25.405904 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-03 15:28:25.406690 | orchestrator | Tuesday 03 June 2025 15:28:25 +0000 (0:00:00.131) 0:00:43.859 ********** 2025-06-03 15:28:25.548980 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:25.549169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:25.550195 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:25.550820 | orchestrator | 2025-06-03 15:28:25.551834 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-03 15:28:25.553139 | orchestrator | Tuesday 03 June 2025 15:28:25 +0000 (0:00:00.145) 0:00:44.004 ********** 2025-06-03 15:28:25.694317 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:25.695530 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:25.696779 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:25.697887 | orchestrator | 2025-06-03 15:28:25.698758 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-03 15:28:25.699597 | orchestrator | Tuesday 03 June 2025 15:28:25 +0000 (0:00:00.146) 0:00:44.150 ********** 2025-06-03 15:28:25.962304 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:25.962770 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:25.963567 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:25.964227 | orchestrator | 2025-06-03 15:28:25.964867 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-03 15:28:25.965593 | orchestrator | Tuesday 03 June 2025 15:28:25 +0000 (0:00:00.266) 0:00:44.417 ********** 2025-06-03 15:28:26.115760 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:26.115861 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:26.115936 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:26.116721 | orchestrator | 2025-06-03 15:28:26.117035 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-03 15:28:26.117623 | orchestrator | Tuesday 03 June 2025 15:28:26 +0000 (0:00:00.154) 0:00:44.572 ********** 2025-06-03 15:28:26.278068 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:26.278215 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:26.279048 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:26.279942 | orchestrator | 2025-06-03 15:28:26.280560 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-03 15:28:26.281363 | orchestrator | Tuesday 03 June 2025 15:28:26 +0000 (0:00:00.163) 0:00:44.735 ********** 2025-06-03 15:28:26.418551 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:26.420179 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:26.422912 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:26.422988 | orchestrator | 2025-06-03 15:28:26.423684 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-03 15:28:26.424817 | orchestrator | Tuesday 03 June 2025 15:28:26 +0000 (0:00:00.140) 0:00:44.875 ********** 2025-06-03 15:28:26.549632 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:26.549878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:26.551051 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:26.551553 | orchestrator | 2025-06-03 15:28:26.552588 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-03 15:28:26.553119 | orchestrator | Tuesday 03 June 2025 15:28:26 +0000 (0:00:00.129) 0:00:45.005 ********** 2025-06-03 15:28:27.042293 | orchestrator | ok: [testbed-node-4] 2025-06-03 
15:28:27.042396 | orchestrator | 2025-06-03 15:28:27.042408 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-03 15:28:27.042426 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.492) 0:00:45.498 ********** 2025-06-03 15:28:27.501998 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:28:27.502152 | orchestrator | 2025-06-03 15:28:27.502230 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-03 15:28:27.503073 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.458) 0:00:45.956 ********** 2025-06-03 15:28:27.632106 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:28:27.632805 | orchestrator | 2025-06-03 15:28:27.633510 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-03 15:28:27.634276 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.132) 0:00:46.089 ********** 2025-06-03 15:28:27.803895 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'vg_name': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'}) 2025-06-03 15:28:27.803972 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'vg_name': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'}) 2025-06-03 15:28:27.805188 | orchestrator | 2025-06-03 15:28:27.805538 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-03 15:28:27.806517 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.171) 0:00:46.260 ********** 2025-06-03 15:28:27.942831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:27.943657 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:27.944241 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:27.945195 | orchestrator | 2025-06-03 15:28:27.946503 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-03 15:28:27.947655 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.138) 0:00:46.398 ********** 2025-06-03 15:28:28.096212 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:28.098582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  2025-06-03 15:28:28.098618 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:28.098632 | orchestrator | 2025-06-03 15:28:28.099750 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-03 15:28:28.100421 | orchestrator | Tuesday 03 June 2025 15:28:28 +0000 (0:00:00.153) 0:00:46.552 ********** 2025-06-03 15:28:28.238089 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'})  2025-06-03 15:28:28.238853 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'})  
2025-06-03 15:28:28.239321 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:28.240077 | orchestrator | 2025-06-03 15:28:28.240947 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-03 15:28:28.241590 | orchestrator | Tuesday 03 June 2025 15:28:28 +0000 (0:00:00.140) 0:00:46.692 ********** 2025-06-03 15:28:28.622292 | orchestrator | ok: [testbed-node-4] => { 2025-06-03 15:28:28.623569 | orchestrator |  "lvm_report": { 2025-06-03 15:28:28.624288 | orchestrator |  "lv": [ 2025-06-03 15:28:28.624970 | orchestrator |  { 2025-06-03 15:28:28.625961 | orchestrator |  "lv_name": "osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595", 2025-06-03 15:28:28.626752 | orchestrator |  "vg_name": "ceph-1191cd60-4b8c-5454-8e42-9818af3c2595" 2025-06-03 15:28:28.627529 | orchestrator |  }, 2025-06-03 15:28:28.628157 | orchestrator |  { 2025-06-03 15:28:28.628887 | orchestrator |  "lv_name": "osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9", 2025-06-03 15:28:28.629313 | orchestrator |  "vg_name": "ceph-8e839e97-cc3d-5431-ae91-f94b997cade9" 2025-06-03 15:28:28.630648 | orchestrator |  } 2025-06-03 15:28:28.631175 | orchestrator |  ], 2025-06-03 15:28:28.631620 | orchestrator |  "pv": [ 2025-06-03 15:28:28.632219 | orchestrator |  { 2025-06-03 15:28:28.632873 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-03 15:28:28.633333 | orchestrator |  "vg_name": "ceph-8e839e97-cc3d-5431-ae91-f94b997cade9" 2025-06-03 15:28:28.633775 | orchestrator |  }, 2025-06-03 15:28:28.635140 | orchestrator |  { 2025-06-03 15:28:28.635539 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-03 15:28:28.636799 | orchestrator |  "vg_name": "ceph-1191cd60-4b8c-5454-8e42-9818af3c2595" 2025-06-03 15:28:28.637552 | orchestrator |  } 2025-06-03 15:28:28.638417 | orchestrator |  ] 2025-06-03 15:28:28.638909 | orchestrator |  } 2025-06-03 15:28:28.639804 | orchestrator | } 2025-06-03 15:28:28.640299 | orchestrator | 2025-06-03 15:28:28.641025 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-03 15:28:28.641724 | orchestrator | 2025-06-03 15:28:28.642079 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-03 15:28:28.642647 | orchestrator | Tuesday 03 June 2025 15:28:28 +0000 (0:00:00.386) 0:00:47.079 ********** 2025-06-03 15:28:28.845843 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-03 15:28:28.846378 | orchestrator | 2025-06-03 15:28:28.846878 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-03 15:28:28.847874 | orchestrator | Tuesday 03 June 2025 15:28:28 +0000 (0:00:00.222) 0:00:47.302 ********** 2025-06-03 15:28:29.048630 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:28:29.048808 | orchestrator | 2025-06-03 15:28:29.049725 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:29.049753 | orchestrator | Tuesday 03 June 2025 15:28:29 +0000 (0:00:00.202) 0:00:47.504 ********** 2025-06-03 15:28:29.408353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-03 15:28:29.409632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-03 15:28:29.410563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-03 15:28:29.411433 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-03 15:28:29.412706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-03 15:28:29.413213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-03 15:28:29.414095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-03 15:28:29.415411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-03 15:28:29.416113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-03 15:28:29.416542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-03 15:28:29.417756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-03 15:28:29.418816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-03 15:28:29.419133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-03 15:28:29.419939 | orchestrator | 2025-06-03 15:28:29.420618 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:29.421533 | orchestrator | Tuesday 03 June 2025 15:28:29 +0000 (0:00:00.359) 0:00:47.864 ********** 2025-06-03 15:28:29.586903 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:29.587066 | orchestrator | 2025-06-03 15:28:29.587709 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:29.588352 | orchestrator | Tuesday 03 June 2025 15:28:29 +0000 (0:00:00.179) 0:00:48.043 ********** 2025-06-03 15:28:29.765700 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:29.766339 | orchestrator | 2025-06-03 15:28:29.767489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:29.768158 | orchestrator | Tuesday 03 June 2025 15:28:29 +0000 (0:00:00.178) 0:00:48.222 ********** 2025-06-03 15:28:29.950418 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:29.951142 | orchestrator | 2025-06-03 15:28:29.951843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:29.952653 | orchestrator | Tuesday 03 June 2025 15:28:29 +0000 (0:00:00.184) 0:00:48.407 ********** 2025-06-03 15:28:30.120749 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:30.120946 | orchestrator | 2025-06-03 15:28:30.121173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:30.121731 | orchestrator | Tuesday 03 June 2025 15:28:30 +0000 (0:00:00.170) 0:00:48.578 ********** 2025-06-03 15:28:30.306286 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:30.306963 | orchestrator | 2025-06-03 15:28:30.308416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:30.309233 | orchestrator | Tuesday 03 June 2025 15:28:30 +0000 (0:00:00.184) 0:00:48.763 ********** 2025-06-03 15:28:30.773623 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:30.775386 | orchestrator | 2025-06-03 15:28:30.775424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:30.776600 | orchestrator | 
Tuesday 03 June 2025 15:28:30 +0000 (0:00:00.467) 0:00:49.230 ********** 2025-06-03 15:28:30.952628 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:30.954151 | orchestrator | 2025-06-03 15:28:30.955360 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:30.956597 | orchestrator | Tuesday 03 June 2025 15:28:30 +0000 (0:00:00.178) 0:00:49.409 ********** 2025-06-03 15:28:31.130798 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:31.131747 | orchestrator | 2025-06-03 15:28:31.132555 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:31.133293 | orchestrator | Tuesday 03 June 2025 15:28:31 +0000 (0:00:00.178) 0:00:49.587 ********** 2025-06-03 15:28:31.520792 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df) 2025-06-03 15:28:31.521860 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df) 2025-06-03 15:28:31.522658 | orchestrator | 2025-06-03 15:28:31.523414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:31.523988 | orchestrator | Tuesday 03 June 2025 15:28:31 +0000 (0:00:00.389) 0:00:49.977 ********** 2025-06-03 15:28:31.898253 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9) 2025-06-03 15:28:31.898405 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9) 2025-06-03 15:28:31.899565 | orchestrator | 2025-06-03 15:28:31.899767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:31.900623 | orchestrator | Tuesday 03 June 2025 15:28:31 +0000 (0:00:00.377) 0:00:50.354 ********** 2025-06-03 15:28:32.305420 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057) 2025-06-03 15:28:32.307159 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057) 2025-06-03 15:28:32.307233 | orchestrator | 2025-06-03 15:28:32.307708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:32.308147 | orchestrator | Tuesday 03 June 2025 15:28:32 +0000 (0:00:00.406) 0:00:50.761 ********** 2025-06-03 15:28:32.706339 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447) 2025-06-03 15:28:32.706789 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447) 2025-06-03 15:28:32.707965 | orchestrator | 2025-06-03 15:28:32.708610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:32.709382 | orchestrator | Tuesday 03 June 2025 15:28:32 +0000 (0:00:00.400) 0:00:51.161 ********** 2025-06-03 15:28:33.007919 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-03 15:28:33.008678 | orchestrator | 2025-06-03 15:28:33.009434 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:33.010350 | orchestrator | Tuesday 03 June 2025 15:28:33 +0000 (0:00:00.302) 0:00:51.464 ********** 2025-06-03 15:28:33.387254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-03 
15:28:33.387521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-03 15:28:33.389022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-03 15:28:33.389835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-03 15:28:33.390666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-03 15:28:33.391780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-03 15:28:33.392612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-03 15:28:33.393142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-03 15:28:33.393703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-03 15:28:33.394378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-03 15:28:33.394753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-03 15:28:33.395397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-03 15:28:33.395800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-03 15:28:33.396312 | orchestrator | 2025-06-03 15:28:33.396805 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:33.397185 | orchestrator | Tuesday 03 June 2025 15:28:33 +0000 (0:00:00.378) 0:00:51.842 ********** 2025-06-03 15:28:33.569914 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:33.570293 | orchestrator | 2025-06-03 15:28:33.570570 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:33.571521 | orchestrator | Tuesday 03 June 2025 15:28:33 +0000 (0:00:00.183) 0:00:52.026 ********** 2025-06-03 15:28:33.759202 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:33.759912 | orchestrator | 2025-06-03 15:28:33.760967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:33.761726 | orchestrator | Tuesday 03 June 2025 15:28:33 +0000 (0:00:00.189) 0:00:52.216 ********** 2025-06-03 15:28:34.235522 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:34.235911 | orchestrator | 2025-06-03 15:28:34.236591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:34.237091 | orchestrator | Tuesday 03 June 2025 15:28:34 +0000 (0:00:00.476) 0:00:52.692 ********** 2025-06-03 15:28:34.428843 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:34.428963 | orchestrator | 2025-06-03 15:28:34.430108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:34.430844 | orchestrator | Tuesday 03 June 2025 15:28:34 +0000 (0:00:00.192) 0:00:52.885 ********** 2025-06-03 15:28:34.624607 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:34.624759 | orchestrator | 2025-06-03 15:28:34.626535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:34.627198 | orchestrator | 
Tuesday 03 June 2025 15:28:34 +0000 (0:00:00.196) 0:00:53.081 ********** 2025-06-03 15:28:34.806765 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:34.806884 | orchestrator | 2025-06-03 15:28:34.807329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:34.807353 | orchestrator | Tuesday 03 June 2025 15:28:34 +0000 (0:00:00.182) 0:00:53.263 ********** 2025-06-03 15:28:34.988720 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:34.989224 | orchestrator | 2025-06-03 15:28:34.990882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:34.991063 | orchestrator | Tuesday 03 June 2025 15:28:34 +0000 (0:00:00.181) 0:00:53.445 ********** 2025-06-03 15:28:35.206292 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:35.206568 | orchestrator | 2025-06-03 15:28:35.206942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:35.207559 | orchestrator | Tuesday 03 June 2025 15:28:35 +0000 (0:00:00.215) 0:00:53.661 ********** 2025-06-03 15:28:35.793635 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-03 15:28:35.794416 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-03 15:28:35.795203 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-03 15:28:35.795940 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-03 15:28:35.797089 | orchestrator | 2025-06-03 15:28:35.798091 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:35.798693 | orchestrator | Tuesday 03 June 2025 15:28:35 +0000 (0:00:00.587) 0:00:54.248 ********** 2025-06-03 15:28:35.974084 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:35.974170 | orchestrator | 2025-06-03 15:28:35.974262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:35.974891 | orchestrator | Tuesday 03 June 2025 15:28:35 +0000 (0:00:00.182) 0:00:54.430 ********** 2025-06-03 15:28:36.144069 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:36.144240 | orchestrator | 2025-06-03 15:28:36.144355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:36.145243 | orchestrator | Tuesday 03 June 2025 15:28:36 +0000 (0:00:00.169) 0:00:54.600 ********** 2025-06-03 15:28:36.324989 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:36.325086 | orchestrator | 2025-06-03 15:28:36.325619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:36.326104 | orchestrator | Tuesday 03 June 2025 15:28:36 +0000 (0:00:00.181) 0:00:54.782 ********** 2025-06-03 15:28:36.508887 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:36.509101 | orchestrator | 2025-06-03 15:28:36.509591 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-03 15:28:36.510881 | orchestrator | Tuesday 03 June 2025 15:28:36 +0000 (0:00:00.183) 0:00:54.965 ********** 2025-06-03 15:28:36.785618 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:36.786334 | orchestrator | 2025-06-03 15:28:36.787378 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-03 15:28:36.788134 | orchestrator | Tuesday 03 June 2025 15:28:36 +0000 (0:00:00.276) 0:00:55.242 ********** 2025-06-03 
15:28:36.956050 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53b632c4-9781-517b-ad8e-3b37c9789a01'}}) 2025-06-03 15:28:36.957668 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'}}) 2025-06-03 15:28:36.957880 | orchestrator | 2025-06-03 15:28:36.959231 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-03 15:28:36.959853 | orchestrator | Tuesday 03 June 2025 15:28:36 +0000 (0:00:00.169) 0:00:55.412 ********** 2025-06-03 15:28:38.836595 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'}) 2025-06-03 15:28:38.838259 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'}) 2025-06-03 15:28:38.839144 | orchestrator | 2025-06-03 15:28:38.842173 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-03 15:28:38.842227 | orchestrator | Tuesday 03 June 2025 15:28:38 +0000 (0:00:01.878) 0:00:57.290 ********** 2025-06-03 15:28:39.009055 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:39.010730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:39.012067 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:39.012908 | orchestrator | 2025-06-03 15:28:39.013834 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-03 15:28:39.014556 | orchestrator | Tuesday 03 June 2025 15:28:39 +0000 (0:00:00.172) 0:00:57.463 ********** 2025-06-03 15:28:40.312058 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'}) 2025-06-03 15:28:40.313108 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'}) 2025-06-03 15:28:40.314425 | orchestrator | 2025-06-03 15:28:40.314452 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-03 15:28:40.315282 | orchestrator | Tuesday 03 June 2025 15:28:40 +0000 (0:00:01.303) 0:00:58.766 ********** 2025-06-03 15:28:40.475852 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:40.475944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:40.476078 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:40.476969 | orchestrator | 2025-06-03 15:28:40.477223 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-03 15:28:40.477943 | orchestrator | Tuesday 03 June 2025 15:28:40 +0000 (0:00:00.164) 0:00:58.931 ********** 2025-06-03 15:28:40.625094 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:40.626195 | 
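The "Create block VGs" and "Create block LVs" tasks above carve one volume group and one full-size block logical volume per OSD data device (here sdb and sdc on testbed-node-5). A minimal sketch of equivalent Ansible tasks, assuming the community.general.lvg and community.general.lvol modules and a hypothetical item.device key for the backing disk (the actual play derives the physical volume from ceph_osd_devices, so this is an illustration, not the task that produced the output above):

  - name: Create block VGs (sketch, not the exact task from the play above)
    community.general.lvg:
      vg: "{{ item.data_vg }}"         # e.g. ceph-53b632c4-9781-517b-ad8e-3b37c9789a01
      pvs: "/dev/{{ item.device }}"    # item.device is a hypothetical key used only for illustration
    loop: "{{ lvm_volumes }}"

  - name: Create block LVs (sketch)
    community.general.lvol:
      vg: "{{ item.data_vg }}"
      lv: "{{ item.data }}"            # e.g. osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01
      size: "100%FREE"
      shrink: false
    loop: "{{ lvm_volumes }}"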
orchestrator | 2025-06-03 15:28:40.627396 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-03 15:28:40.629655 | orchestrator | Tuesday 03 June 2025 15:28:40 +0000 (0:00:00.147) 0:00:59.079 ********** 2025-06-03 15:28:40.781288 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:40.782251 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:40.783079 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:40.785054 | orchestrator | 2025-06-03 15:28:40.785099 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-03 15:28:40.785133 | orchestrator | Tuesday 03 June 2025 15:28:40 +0000 (0:00:00.157) 0:00:59.236 ********** 2025-06-03 15:28:40.918630 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:40.919892 | orchestrator | 2025-06-03 15:28:40.920581 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-03 15:28:40.922300 | orchestrator | Tuesday 03 June 2025 15:28:40 +0000 (0:00:00.136) 0:00:59.373 ********** 2025-06-03 15:28:41.071971 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:41.072080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:41.072096 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:41.072712 | orchestrator | 2025-06-03 15:28:41.073460 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-03 15:28:41.074139 | orchestrator | Tuesday 03 June 2025 15:28:41 +0000 (0:00:00.153) 0:00:59.526 ********** 2025-06-03 15:28:41.214286 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:41.215024 | orchestrator | 2025-06-03 15:28:41.216532 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-03 15:28:41.217412 | orchestrator | Tuesday 03 June 2025 15:28:41 +0000 (0:00:00.143) 0:00:59.670 ********** 2025-06-03 15:28:41.365941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:41.366171 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:41.366586 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:41.366912 | orchestrator | 2025-06-03 15:28:41.367728 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-03 15:28:41.368145 | orchestrator | Tuesday 03 June 2025 15:28:41 +0000 (0:00:00.152) 0:00:59.822 ********** 2025-06-03 15:28:41.499951 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:28:41.502202 | orchestrator | 2025-06-03 15:28:41.503694 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-03 15:28:41.503729 | orchestrator | Tuesday 03 June 2025 15:28:41 +0000 (0:00:00.132) 
0:00:59.955 ********** 2025-06-03 15:28:41.869955 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:41.870367 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:41.871207 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:41.872127 | orchestrator | 2025-06-03 15:28:41.872736 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-03 15:28:41.874883 | orchestrator | Tuesday 03 June 2025 15:28:41 +0000 (0:00:00.370) 0:01:00.326 ********** 2025-06-03 15:28:42.049442 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:42.050366 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:42.051556 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:42.053029 | orchestrator | 2025-06-03 15:28:42.053103 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-03 15:28:42.053639 | orchestrator | Tuesday 03 June 2025 15:28:42 +0000 (0:00:00.177) 0:01:00.504 ********** 2025-06-03 15:28:42.216017 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:42.216104 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:42.216586 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:42.217756 | orchestrator | 2025-06-03 15:28:42.218270 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-03 15:28:42.218930 | orchestrator | Tuesday 03 June 2025 15:28:42 +0000 (0:00:00.165) 0:01:00.669 ********** 2025-06-03 15:28:42.358553 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:42.358945 | orchestrator | 2025-06-03 15:28:42.360558 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-03 15:28:42.360927 | orchestrator | Tuesday 03 June 2025 15:28:42 +0000 (0:00:00.144) 0:01:00.814 ********** 2025-06-03 15:28:42.510645 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:42.511457 | orchestrator | 2025-06-03 15:28:42.511535 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-03 15:28:42.511559 | orchestrator | Tuesday 03 June 2025 15:28:42 +0000 (0:00:00.145) 0:01:00.960 ********** 2025-06-03 15:28:42.648162 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:42.713737 | orchestrator | 2025-06-03 15:28:42.713803 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-03 15:28:42.713817 | orchestrator | Tuesday 03 June 2025 15:28:42 +0000 (0:00:00.144) 0:01:01.104 ********** 2025-06-03 15:28:42.802168 | orchestrator | ok: [testbed-node-5] => { 2025-06-03 15:28:42.802653 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-03 15:28:42.804292 | orchestrator | } 2025-06-03 
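The "Count OSDs put on ceph_db_devices ..." and "Fail if number of OSDs exceeds num_osds ..." tasks above implement a guard: a DB, WAL, or DB+WAL volume group may only be referenced by as many OSDs as its configured num_osds allows. All of them are skipped here because no separate DB/WAL devices are configured. A sketch of that guard pattern, assuming the _num_osds_wanted_per_db_vg dict shown in the debug output maps VG names to requested OSD counts and that a per-VG num_osds limit is available (the exact lookup in the real play may differ):

  - name: Fail if number of OSDs exceeds num_osds for a DB VG (sketch of the guard)
    ansible.builtin.fail:
      msg: >-
        VG {{ item.key }} is referenced by {{ item.value }} OSDs,
        but only {{ num_osds }} are allowed.
    loop: "{{ _num_osds_wanted_per_db_vg | dict2items }}"
    when: item.value | int > num_osds | int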
15:28:42.805424 | orchestrator | 2025-06-03 15:28:42.806819 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-03 15:28:42.807147 | orchestrator | Tuesday 03 June 2025 15:28:42 +0000 (0:00:00.153) 0:01:01.257 ********** 2025-06-03 15:28:42.954331 | orchestrator | ok: [testbed-node-5] => { 2025-06-03 15:28:42.955063 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-03 15:28:42.956840 | orchestrator | } 2025-06-03 15:28:42.956865 | orchestrator | 2025-06-03 15:28:42.957741 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-03 15:28:42.958419 | orchestrator | Tuesday 03 June 2025 15:28:42 +0000 (0:00:00.150) 0:01:01.408 ********** 2025-06-03 15:28:43.101983 | orchestrator | ok: [testbed-node-5] => { 2025-06-03 15:28:43.102227 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-03 15:28:43.102247 | orchestrator | } 2025-06-03 15:28:43.102801 | orchestrator | 2025-06-03 15:28:43.103735 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-03 15:28:43.103832 | orchestrator | Tuesday 03 June 2025 15:28:43 +0000 (0:00:00.149) 0:01:01.557 ********** 2025-06-03 15:28:43.614775 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:28:43.615815 | orchestrator | 2025-06-03 15:28:43.616114 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-03 15:28:43.616545 | orchestrator | Tuesday 03 June 2025 15:28:43 +0000 (0:00:00.513) 0:01:02.070 ********** 2025-06-03 15:28:44.116894 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:28:44.117066 | orchestrator | 2025-06-03 15:28:44.117637 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-03 15:28:44.117938 | orchestrator | Tuesday 03 June 2025 15:28:44 +0000 (0:00:00.501) 0:01:02.572 ********** 2025-06-03 15:28:44.612830 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:28:44.613201 | orchestrator | 2025-06-03 15:28:44.614699 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-03 15:28:44.615727 | orchestrator | Tuesday 03 June 2025 15:28:44 +0000 (0:00:00.495) 0:01:03.067 ********** 2025-06-03 15:28:44.970402 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:28:44.971298 | orchestrator | 2025-06-03 15:28:44.973612 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-03 15:28:44.974452 | orchestrator | Tuesday 03 June 2025 15:28:44 +0000 (0:00:00.357) 0:01:03.425 ********** 2025-06-03 15:28:45.080887 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:45.080986 | orchestrator | 2025-06-03 15:28:45.081540 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-03 15:28:45.081963 | orchestrator | Tuesday 03 June 2025 15:28:45 +0000 (0:00:00.111) 0:01:03.537 ********** 2025-06-03 15:28:45.180085 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:45.180207 | orchestrator | 2025-06-03 15:28:45.180617 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-03 15:28:45.181107 | orchestrator | Tuesday 03 June 2025 15:28:45 +0000 (0:00:00.099) 0:01:03.636 ********** 2025-06-03 15:28:45.328978 | orchestrator | ok: [testbed-node-5] => { 2025-06-03 15:28:45.329312 | orchestrator |  "vgs_report": { 2025-06-03 15:28:45.330413 | orchestrator |  "vg": [] 2025-06-03 
15:28:45.331148 | orchestrator |  } 2025-06-03 15:28:45.331956 | orchestrator | } 2025-06-03 15:28:45.333232 | orchestrator | 2025-06-03 15:28:45.333846 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-03 15:28:45.334624 | orchestrator | Tuesday 03 June 2025 15:28:45 +0000 (0:00:00.148) 0:01:03.785 ********** 2025-06-03 15:28:45.450867 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:45.451058 | orchestrator | 2025-06-03 15:28:45.452324 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-03 15:28:45.452348 | orchestrator | Tuesday 03 June 2025 15:28:45 +0000 (0:00:00.121) 0:01:03.907 ********** 2025-06-03 15:28:45.569736 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:45.569933 | orchestrator | 2025-06-03 15:28:45.570520 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-03 15:28:45.571266 | orchestrator | Tuesday 03 June 2025 15:28:45 +0000 (0:00:00.118) 0:01:04.026 ********** 2025-06-03 15:28:45.716041 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:45.717059 | orchestrator | 2025-06-03 15:28:45.718146 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-03 15:28:45.719142 | orchestrator | Tuesday 03 June 2025 15:28:45 +0000 (0:00:00.146) 0:01:04.172 ********** 2025-06-03 15:28:45.858307 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:45.858580 | orchestrator | 2025-06-03 15:28:45.859460 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-03 15:28:45.860288 | orchestrator | Tuesday 03 June 2025 15:28:45 +0000 (0:00:00.140) 0:01:04.313 ********** 2025-06-03 15:28:45.986928 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:45.987881 | orchestrator | 2025-06-03 15:28:45.988716 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-03 15:28:45.990278 | orchestrator | Tuesday 03 June 2025 15:28:45 +0000 (0:00:00.128) 0:01:04.441 ********** 2025-06-03 15:28:46.111748 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:46.112044 | orchestrator | 2025-06-03 15:28:46.112673 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-03 15:28:46.113182 | orchestrator | Tuesday 03 June 2025 15:28:46 +0000 (0:00:00.127) 0:01:04.568 ********** 2025-06-03 15:28:46.245094 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:46.246100 | orchestrator | 2025-06-03 15:28:46.246967 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-03 15:28:46.248070 | orchestrator | Tuesday 03 June 2025 15:28:46 +0000 (0:00:00.132) 0:01:04.700 ********** 2025-06-03 15:28:46.378258 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:46.379179 | orchestrator | 2025-06-03 15:28:46.380645 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-03 15:28:46.381468 | orchestrator | Tuesday 03 June 2025 15:28:46 +0000 (0:00:00.133) 0:01:04.834 ********** 2025-06-03 15:28:46.626775 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:46.627195 | orchestrator | 2025-06-03 15:28:46.627947 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-03 15:28:46.628987 | orchestrator | Tuesday 03 June 2025 15:28:46 +0000 
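The "Gather ... VGs with total and available size in bytes" tasks above read the current LVM state, and the "Combine JSON" step merges the results into the vgs_report structure printed just before this point (empty here, since no DB/WAL VGs exist). A rough sketch of how such a report can be collected, assuming a hypothetical ceph_db_vg_names list and relying on vgs --reportformat json, whose output has the shape {"report": [{"vg": [...]}]}:

  - name: Gather DB VGs with total and available size in bytes (sketch)
    ansible.builtin.command: >
      vgs --units B --nosuffix --reportformat json
      -o vg_name,vg_size,vg_free {{ ceph_db_vg_names | join(' ') }}
    register: _db_vgs_cmd_output
    changed_when: false

  - name: Combine JSON from _db_vgs_cmd_output (sketch)
    ansible.builtin.set_fact:
      vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"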
(0:00:00.248) 0:01:05.082 ********** 2025-06-03 15:28:46.758689 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:46.758919 | orchestrator | 2025-06-03 15:28:46.758947 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-03 15:28:46.759319 | orchestrator | Tuesday 03 June 2025 15:28:46 +0000 (0:00:00.132) 0:01:05.215 ********** 2025-06-03 15:28:46.890894 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:46.892418 | orchestrator | 2025-06-03 15:28:46.894005 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-03 15:28:46.894469 | orchestrator | Tuesday 03 June 2025 15:28:46 +0000 (0:00:00.130) 0:01:05.346 ********** 2025-06-03 15:28:47.021848 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:47.022371 | orchestrator | 2025-06-03 15:28:47.023401 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-03 15:28:47.024254 | orchestrator | Tuesday 03 June 2025 15:28:47 +0000 (0:00:00.130) 0:01:05.476 ********** 2025-06-03 15:28:47.152408 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:47.153003 | orchestrator | 2025-06-03 15:28:47.154176 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-03 15:28:47.154989 | orchestrator | Tuesday 03 June 2025 15:28:47 +0000 (0:00:00.132) 0:01:05.608 ********** 2025-06-03 15:28:47.281390 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:47.281845 | orchestrator | 2025-06-03 15:28:47.282719 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-03 15:28:47.283930 | orchestrator | Tuesday 03 June 2025 15:28:47 +0000 (0:00:00.128) 0:01:05.737 ********** 2025-06-03 15:28:47.434180 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:47.434273 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:47.435465 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:47.436193 | orchestrator | 2025-06-03 15:28:47.437141 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-03 15:28:47.438578 | orchestrator | Tuesday 03 June 2025 15:28:47 +0000 (0:00:00.151) 0:01:05.888 ********** 2025-06-03 15:28:47.576868 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:47.578270 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:47.578317 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:47.579040 | orchestrator | 2025-06-03 15:28:47.579780 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-03 15:28:47.580671 | orchestrator | Tuesday 03 June 2025 15:28:47 +0000 (0:00:00.143) 0:01:06.032 ********** 2025-06-03 15:28:47.714403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:47.714556 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:47.714659 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:47.715299 | orchestrator | 2025-06-03 15:28:47.715322 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-03 15:28:47.716724 | orchestrator | Tuesday 03 June 2025 15:28:47 +0000 (0:00:00.139) 0:01:06.171 ********** 2025-06-03 15:28:47.847301 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:47.847633 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:47.848046 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:47.849007 | orchestrator | 2025-06-03 15:28:47.849537 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-03 15:28:47.849870 | orchestrator | Tuesday 03 June 2025 15:28:47 +0000 (0:00:00.133) 0:01:06.304 ********** 2025-06-03 15:28:47.996547 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:47.996648 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:47.997739 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:47.999038 | orchestrator | 2025-06-03 15:28:47.999989 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-03 15:28:48.000931 | orchestrator | Tuesday 03 June 2025 15:28:47 +0000 (0:00:00.148) 0:01:06.453 ********** 2025-06-03 15:28:48.121674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:48.121867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:48.122653 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:48.123125 | orchestrator | 2025-06-03 15:28:48.123656 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-03 15:28:48.124110 | orchestrator | Tuesday 03 June 2025 15:28:48 +0000 (0:00:00.124) 0:01:06.577 ********** 2025-06-03 15:28:48.373933 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:48.374819 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:48.375824 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:48.376430 | orchestrator | 2025-06-03 15:28:48.377243 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-03 15:28:48.378465 | orchestrator | Tuesday 03 June 2025 15:28:48 +0000 (0:00:00.252) 0:01:06.829 ********** 2025-06-03 
15:28:48.501752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:48.501890 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:48.501908 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:48.502221 | orchestrator | 2025-06-03 15:28:48.502474 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-03 15:28:48.502653 | orchestrator | Tuesday 03 June 2025 15:28:48 +0000 (0:00:00.129) 0:01:06.959 ********** 2025-06-03 15:28:49.002895 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:28:49.003798 | orchestrator | 2025-06-03 15:28:49.004860 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-03 15:28:49.005883 | orchestrator | Tuesday 03 June 2025 15:28:48 +0000 (0:00:00.500) 0:01:07.460 ********** 2025-06-03 15:28:49.517221 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:28:49.517288 | orchestrator | 2025-06-03 15:28:49.518215 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-03 15:28:49.519379 | orchestrator | Tuesday 03 June 2025 15:28:49 +0000 (0:00:00.511) 0:01:07.971 ********** 2025-06-03 15:28:49.649514 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:28:49.650307 | orchestrator | 2025-06-03 15:28:49.651946 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-03 15:28:49.653840 | orchestrator | Tuesday 03 June 2025 15:28:49 +0000 (0:00:00.135) 0:01:08.106 ********** 2025-06-03 15:28:49.808947 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'vg_name': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'}) 2025-06-03 15:28:49.810218 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'vg_name': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'}) 2025-06-03 15:28:49.810863 | orchestrator | 2025-06-03 15:28:49.811735 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-03 15:28:49.813094 | orchestrator | Tuesday 03 June 2025 15:28:49 +0000 (0:00:00.158) 0:01:08.265 ********** 2025-06-03 15:28:49.935674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:49.936126 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:49.936805 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:49.937615 | orchestrator | 2025-06-03 15:28:49.938516 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-03 15:28:49.939281 | orchestrator | Tuesday 03 June 2025 15:28:49 +0000 (0:00:00.126) 0:01:08.391 ********** 2025-06-03 15:28:50.080247 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:50.080680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 
'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:50.080940 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:50.081918 | orchestrator | 2025-06-03 15:28:50.082441 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-03 15:28:50.083162 | orchestrator | Tuesday 03 June 2025 15:28:50 +0000 (0:00:00.144) 0:01:08.535 ********** 2025-06-03 15:28:50.215534 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'})  2025-06-03 15:28:50.216043 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'})  2025-06-03 15:28:50.218992 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:50.220099 | orchestrator | 2025-06-03 15:28:50.221397 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-03 15:28:50.222163 | orchestrator | Tuesday 03 June 2025 15:28:50 +0000 (0:00:00.133) 0:01:08.669 ********** 2025-06-03 15:28:50.341552 | orchestrator | ok: [testbed-node-5] => { 2025-06-03 15:28:50.341739 | orchestrator |  "lvm_report": { 2025-06-03 15:28:50.342554 | orchestrator |  "lv": [ 2025-06-03 15:28:50.343666 | orchestrator |  { 2025-06-03 15:28:50.344851 | orchestrator |  "lv_name": "osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01", 2025-06-03 15:28:50.345661 | orchestrator |  "vg_name": "ceph-53b632c4-9781-517b-ad8e-3b37c9789a01" 2025-06-03 15:28:50.346413 | orchestrator |  }, 2025-06-03 15:28:50.346649 | orchestrator |  { 2025-06-03 15:28:50.347708 | orchestrator |  "lv_name": "osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5", 2025-06-03 15:28:50.348774 | orchestrator |  "vg_name": "ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5" 2025-06-03 15:28:50.350194 | orchestrator |  } 2025-06-03 15:28:50.352341 | orchestrator |  ], 2025-06-03 15:28:50.355817 | orchestrator |  "pv": [ 2025-06-03 15:28:50.356776 | orchestrator |  { 2025-06-03 15:28:50.359732 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-03 15:28:50.360615 | orchestrator |  "vg_name": "ceph-53b632c4-9781-517b-ad8e-3b37c9789a01" 2025-06-03 15:28:50.361167 | orchestrator |  }, 2025-06-03 15:28:50.361745 | orchestrator |  { 2025-06-03 15:28:50.362336 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-03 15:28:50.362996 | orchestrator |  "vg_name": "ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5" 2025-06-03 15:28:50.363516 | orchestrator |  } 2025-06-03 15:28:50.364071 | orchestrator |  ] 2025-06-03 15:28:50.364784 | orchestrator |  } 2025-06-03 15:28:50.366302 | orchestrator | } 2025-06-03 15:28:50.366442 | orchestrator | 2025-06-03 15:28:50.366987 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:28:50.367333 | orchestrator | 2025-06-03 15:28:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:28:50.367613 | orchestrator | 2025-06-03 15:28:50 | INFO  | Please wait and do not abort execution. 
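The LVM report above shows the end state on testbed-node-5: one ceph-<uuid> volume group per data disk (/dev/sdb, /dev/sdc), each carrying a single osd-block-<uuid> logical volume. These are exactly the pairs the play iterated over as lvm_volumes items, derived from the ceph_osd_devices entries logged earlier. A sketch of the corresponding host variables, with values taken from this log; the exact layout is an assumption:

  ceph_osd_devices:
    sdb:
      osd_lvm_uuid: 53b632c4-9781-517b-ad8e-3b37c9789a01
    sdc:
      osd_lvm_uuid: ba1ebe02-3aa8-524d-8f69-e3cc70944ba5

  # Shape of the derived lvm_volumes list, matching the loop items shown above:
  lvm_volumes:
    - data: osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01
      data_vg: ceph-53b632c4-9781-517b-ad8e-3b37c9789a01
    - data: osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5
      data_vg: ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5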
2025-06-03 15:28:50.368091 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-03 15:28:50.368691 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-03 15:28:50.369025 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-03 15:28:50.369402 | orchestrator | 2025-06-03 15:28:50.370209 | orchestrator | 2025-06-03 15:28:50.371429 | orchestrator | 2025-06-03 15:28:50.372658 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:28:50.374773 | orchestrator | Tuesday 03 June 2025 15:28:50 +0000 (0:00:00.127) 0:01:08.797 ********** 2025-06-03 15:28:50.375535 | orchestrator | =============================================================================== 2025-06-03 15:28:50.376179 | orchestrator | Create block VGs -------------------------------------------------------- 5.58s 2025-06-03 15:28:50.376857 | orchestrator | Create block LVs -------------------------------------------------------- 4.07s 2025-06-03 15:28:50.379695 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.71s 2025-06-03 15:28:50.379744 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.50s 2025-06-03 15:28:50.379755 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.49s 2025-06-03 15:28:50.379765 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.48s 2025-06-03 15:28:50.380038 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.45s 2025-06-03 15:28:50.380558 | orchestrator | Add known partitions to the list of available block devices ------------- 1.40s 2025-06-03 15:28:50.381197 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s 2025-06-03 15:28:50.381930 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s 2025-06-03 15:28:50.382477 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2025-06-03 15:28:50.382955 | orchestrator | Print LVM report data --------------------------------------------------- 0.80s 2025-06-03 15:28:50.383634 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2025-06-03 15:28:50.384020 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.72s 2025-06-03 15:28:50.387605 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.68s 2025-06-03 15:28:50.387644 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-06-03 15:28:50.387651 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2025-06-03 15:28:50.387656 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2025-06-03 15:28:50.387661 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.66s 2025-06-03 15:28:50.387666 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-06-03 15:28:52.652296 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:28:52.652402 | orchestrator | Registering Redlock._extend_script 2025-06-03 
15:28:52.652414 | orchestrator | Registering Redlock._release_script 2025-06-03 15:28:52.722870 | orchestrator | 2025-06-03 15:28:52 | INFO  | Task 19e8cdad-b1ae-4dfb-bf0d-7a2be7520378 (facts) was prepared for execution. 2025-06-03 15:28:52.722940 | orchestrator | 2025-06-03 15:28:52 | INFO  | It takes a moment until task 19e8cdad-b1ae-4dfb-bf0d-7a2be7520378 (facts) has been started and output is visible here. 2025-06-03 15:28:56.921203 | orchestrator | 2025-06-03 15:28:56.922164 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-03 15:28:56.922853 | orchestrator | 2025-06-03 15:28:56.923374 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-03 15:28:56.928017 | orchestrator | Tuesday 03 June 2025 15:28:56 +0000 (0:00:00.284) 0:00:00.284 ********** 2025-06-03 15:28:57.963730 | orchestrator | ok: [testbed-manager] 2025-06-03 15:28:57.964881 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:28:57.967506 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:28:57.968730 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:28:57.969051 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:28:57.969777 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:28:57.970767 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:28:57.971452 | orchestrator | 2025-06-03 15:28:57.972337 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-03 15:28:57.974514 | orchestrator | Tuesday 03 June 2025 15:28:57 +0000 (0:00:01.041) 0:00:01.325 ********** 2025-06-03 15:28:58.126178 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:28:58.207811 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:28:58.288731 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:28:58.368027 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:28:58.444774 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:59.182998 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:59.186573 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:28:59.186612 | orchestrator | 2025-06-03 15:28:59.186626 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-03 15:28:59.186638 | orchestrator | 2025-06-03 15:28:59.186650 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-03 15:28:59.187358 | orchestrator | Tuesday 03 June 2025 15:28:59 +0000 (0:00:01.221) 0:00:02.547 ********** 2025-06-03 15:29:04.055283 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:29:04.056242 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:29:04.056273 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:29:04.056278 | orchestrator | ok: [testbed-manager] 2025-06-03 15:29:04.057899 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:29:04.058810 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:29:04.058824 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:29:04.062387 | orchestrator | 2025-06-03 15:29:04.062630 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-03 15:29:04.062954 | orchestrator | 2025-06-03 15:29:04.063417 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-03 15:29:04.063926 | orchestrator | Tuesday 03 June 2025 15:29:04 +0000 (0:00:04.874) 0:00:07.422 ********** 2025-06-03 15:29:04.217404 | orchestrator | skipping: [testbed-manager] 
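The "Apply role facts" play above prepares Ansible local facts: osism.commons.facts creates a custom facts directory and can copy fact files into it (skipped on this run), after which the regular fact gathering picks them up. A minimal sketch of that local-facts mechanism, using the default /etc/ansible/facts.d path and a hypothetical example.fact file (not a file from this deployment):

  - name: Create custom facts directory (sketch)
    ansible.builtin.file:
      path: /etc/ansible/facts.d       # default directory scanned during fact gathering
      state: directory
      mode: "0755"

  - name: Copy fact file (hypothetical example.fact)
    ansible.builtin.copy:
      dest: /etc/ansible/facts.d/example.fact
      content: '{"role": "testbed-node"}'
      mode: "0644"

  # After the next fact-gathering run the value is available as ansible_local.example.role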
2025-06-03 15:29:04.296044 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:29:04.374469 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:29:04.454552 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:29:04.535638 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:29:04.586222 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:29:04.588238 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:04.588363 | orchestrator | 2025-06-03 15:29:04.589647 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:29:04.589997 | orchestrator | 2025-06-03 15:29:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:29:04.590376 | orchestrator | 2025-06-03 15:29:04 | INFO  | Please wait and do not abort execution. 2025-06-03 15:29:04.591113 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:04.592268 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:04.592927 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:04.593145 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:04.593684 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:04.594103 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:04.594587 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:04.594959 | orchestrator | 2025-06-03 15:29:04.595291 | orchestrator | 2025-06-03 15:29:04.595773 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:29:04.596194 | orchestrator | Tuesday 03 June 2025 15:29:04 +0000 (0:00:00.530) 0:00:07.952 ********** 2025-06-03 15:29:04.596728 | orchestrator | =============================================================================== 2025-06-03 15:29:04.597161 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.87s 2025-06-03 15:29:04.597603 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s 2025-06-03 15:29:04.598088 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.04s 2025-06-03 15:29:04.598411 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-06-03 15:29:05.308303 | orchestrator | 2025-06-03 15:29:05.311128 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Jun 3 15:29:05 UTC 2025 2025-06-03 15:29:05.311167 | orchestrator | 2025-06-03 15:29:07.196017 | orchestrator | 2025-06-03 15:29:07 | INFO  | Collection nutshell is prepared for execution 2025-06-03 15:29:07.196114 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [0] - dotfiles 2025-06-03 15:29:07.204044 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:29:07.204211 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:29:07.204629 | orchestrator | Registering Redlock._release_script 2025-06-03 15:29:07.210353 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [0] - homer 2025-06-03 15:29:07.210593 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [0] - 
netdata 2025-06-03 15:29:07.210616 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [0] - openstackclient 2025-06-03 15:29:07.210628 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [0] - phpmyadmin 2025-06-03 15:29:07.210668 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [0] - common 2025-06-03 15:29:07.213355 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [1] -- loadbalancer 2025-06-03 15:29:07.213947 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [2] --- opensearch 2025-06-03 15:29:07.213966 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [2] --- mariadb-ng 2025-06-03 15:29:07.213977 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [3] ---- horizon 2025-06-03 15:29:07.214208 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [3] ---- keystone 2025-06-03 15:29:07.214228 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [4] ----- neutron 2025-06-03 15:29:07.214239 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [5] ------ wait-for-nova 2025-06-03 15:29:07.214663 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [5] ------ octavia 2025-06-03 15:29:07.215430 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [4] ----- barbican 2025-06-03 15:29:07.215469 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [4] ----- designate 2025-06-03 15:29:07.215577 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [4] ----- ironic 2025-06-03 15:29:07.216235 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [4] ----- placement 2025-06-03 15:29:07.216254 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [4] ----- magnum 2025-06-03 15:29:07.216746 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [1] -- openvswitch 2025-06-03 15:29:07.216831 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [2] --- ovn 2025-06-03 15:29:07.217356 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [1] -- memcached 2025-06-03 15:29:07.217377 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [1] -- redis 2025-06-03 15:29:07.217718 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [1] -- rabbitmq-ng 2025-06-03 15:29:07.217911 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [0] - kubernetes 2025-06-03 15:29:07.220085 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [1] -- kubeconfig 2025-06-03 15:29:07.220128 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [1] -- copy-kubeconfig 2025-06-03 15:29:07.220471 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [0] - ceph 2025-06-03 15:29:07.222639 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [1] -- ceph-pools 2025-06-03 15:29:07.222669 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [2] --- copy-ceph-keys 2025-06-03 15:29:07.222680 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [3] ---- cephclient 2025-06-03 15:29:07.222755 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-06-03 15:29:07.222770 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [4] ----- wait-for-keystone 2025-06-03 15:29:07.222957 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [5] ------ kolla-ceph-rgw 2025-06-03 15:29:07.222984 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [5] ------ glance 2025-06-03 15:29:07.222995 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [5] ------ cinder 2025-06-03 15:29:07.223153 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [5] ------ nova 2025-06-03 15:29:07.223358 | orchestrator | 2025-06-03 15:29:07 | INFO  | A [4] ----- prometheus 2025-06-03 15:29:07.223377 | orchestrator | 2025-06-03 15:29:07 | INFO  | D [5] ------ grafana 2025-06-03 15:29:07.455741 | orchestrator | 2025-06-03 15:29:07 | INFO  | All tasks of the collection nutshell are 
prepared for execution 2025-06-03 15:29:07.455816 | orchestrator | 2025-06-03 15:29:07 | INFO  | Tasks are running in the background 2025-06-03 15:29:10.084997 | orchestrator | 2025-06-03 15:29:10 | INFO  | No task IDs specified, wait for all currently running tasks 2025-06-03 15:29:12.217844 | orchestrator | 2025-06-03 15:29:12 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:12.220741 | orchestrator | 2025-06-03 15:29:12 | INFO  | Task c5829c38-5fe4-4e03-a8ab-390d1071764c is in state STARTED 2025-06-03 15:29:12.220826 | orchestrator | 2025-06-03 15:29:12 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:12.223382 | orchestrator | 2025-06-03 15:29:12 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:12.223527 | orchestrator | 2025-06-03 15:29:12 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:12.226406 | orchestrator | 2025-06-03 15:29:12 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:12.226448 | orchestrator | 2025-06-03 15:29:12 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:12.226462 | orchestrator | 2025-06-03 15:29:12 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:15.288295 | orchestrator | 2025-06-03 15:29:15 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:15.288538 | orchestrator | 2025-06-03 15:29:15 | INFO  | Task c5829c38-5fe4-4e03-a8ab-390d1071764c is in state STARTED 2025-06-03 15:29:15.289845 | orchestrator | 2025-06-03 15:29:15 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:15.294446 | orchestrator | 2025-06-03 15:29:15 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:15.295065 | orchestrator | 2025-06-03 15:29:15 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:15.295789 | orchestrator | 2025-06-03 15:29:15 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:15.296518 | orchestrator | 2025-06-03 15:29:15 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:15.296537 | orchestrator | 2025-06-03 15:29:15 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:18.339895 | orchestrator | 2025-06-03 15:29:18 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:18.340001 | orchestrator | 2025-06-03 15:29:18 | INFO  | Task c5829c38-5fe4-4e03-a8ab-390d1071764c is in state STARTED 2025-06-03 15:29:18.340015 | orchestrator | 2025-06-03 15:29:18 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:18.340027 | orchestrator | 2025-06-03 15:29:18 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:18.340038 | orchestrator | 2025-06-03 15:29:18 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:18.340049 | orchestrator | 2025-06-03 15:29:18 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:18.340060 | orchestrator | 2025-06-03 15:29:18 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:18.340071 | orchestrator | 2025-06-03 15:29:18 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:21.395231 | orchestrator | 2025-06-03 15:29:21 | INFO  | Task 
f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:21.395404 | orchestrator | 2025-06-03 15:29:21 | INFO  | Task c5829c38-5fe4-4e03-a8ab-390d1071764c is in state STARTED 2025-06-03 15:29:21.400681 | orchestrator | 2025-06-03 15:29:21 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:21.402785 | orchestrator | 2025-06-03 15:29:21 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:21.403215 | orchestrator | 2025-06-03 15:29:21 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:21.403695 | orchestrator | 2025-06-03 15:29:21 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:21.404150 | orchestrator | 2025-06-03 15:29:21 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:21.404221 | orchestrator | 2025-06-03 15:29:21 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:24.456115 | orchestrator | 2025-06-03 15:29:24 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:24.456224 | orchestrator | 2025-06-03 15:29:24 | INFO  | Task c5829c38-5fe4-4e03-a8ab-390d1071764c is in state STARTED 2025-06-03 15:29:24.456240 | orchestrator | 2025-06-03 15:29:24 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:24.456329 | orchestrator | 2025-06-03 15:29:24 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:24.456741 | orchestrator | 2025-06-03 15:29:24 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:24.457315 | orchestrator | 2025-06-03 15:29:24 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:24.461854 | orchestrator | 2025-06-03 15:29:24 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:24.461890 | orchestrator | 2025-06-03 15:29:24 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:27.497712 | orchestrator | 2025-06-03 15:29:27 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:27.497844 | orchestrator | 2025-06-03 15:29:27 | INFO  | Task c5829c38-5fe4-4e03-a8ab-390d1071764c is in state STARTED 2025-06-03 15:29:27.498677 | orchestrator | 2025-06-03 15:29:27 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:27.499445 | orchestrator | 2025-06-03 15:29:27 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:27.501011 | orchestrator | 2025-06-03 15:29:27 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:27.501477 | orchestrator | 2025-06-03 15:29:27 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:27.503775 | orchestrator | 2025-06-03 15:29:27 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:27.503846 | orchestrator | 2025-06-03 15:29:27 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:30.552977 | orchestrator | 2025-06-03 15:29:30 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:30.554108 | orchestrator | 2025-06-03 15:29:30 | INFO  | Task c5829c38-5fe4-4e03-a8ab-390d1071764c is in state STARTED 2025-06-03 15:29:30.559104 | orchestrator | 2025-06-03 15:29:30 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:30.563332 | 
orchestrator | 2025-06-03 15:29:30 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:30.565495 | orchestrator | 2025-06-03 15:29:30 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:30.567599 | orchestrator | 2025-06-03 15:29:30 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:30.569144 | orchestrator | 2025-06-03 15:29:30 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:30.571269 | orchestrator | 2025-06-03 15:29:30 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:33.614769 | orchestrator | 2025-06-03 15:29:33 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:33.614893 | orchestrator | 2025-06-03 15:29:33 | INFO  | Task c5829c38-5fe4-4e03-a8ab-390d1071764c is in state STARTED 2025-06-03 15:29:33.615017 | orchestrator | 2025-06-03 15:29:33 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:33.619413 | orchestrator | 2025-06-03 15:29:33 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:33.619459 | orchestrator | 2025-06-03 15:29:33 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:33.619618 | orchestrator | 2025-06-03 15:29:33 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:33.622460 | orchestrator | 2025-06-03 15:29:33 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:33.622493 | orchestrator | 2025-06-03 15:29:33 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:36.683627 | orchestrator | 2025-06-03 15:29:36 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:36.685962 | orchestrator | 2025-06-03 15:29:36 | INFO  | Task c5829c38-5fe4-4e03-a8ab-390d1071764c is in state STARTED 2025-06-03 15:29:36.687365 | orchestrator | 2025-06-03 15:29:36 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:36.688829 | orchestrator | 2025-06-03 15:29:36 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:36.690652 | orchestrator | 2025-06-03 15:29:36 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:36.691866 | orchestrator | 2025-06-03 15:29:36 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:36.692989 | orchestrator | 2025-06-03 15:29:36 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:36.693117 | orchestrator | 2025-06-03 15:29:36 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:39.747303 | orchestrator | 2025-06-03 15:29:39 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:39.747706 | orchestrator | 2025-06-03 15:29:39.747765 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-06-03 15:29:39.747778 | orchestrator | 2025-06-03 15:29:39.747790 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-06-03 15:29:39.747801 | orchestrator | Tuesday 03 June 2025 15:29:19 +0000 (0:00:00.758) 0:00:00.758 ********** 2025-06-03 15:29:39.747813 | orchestrator | changed: [testbed-manager] 2025-06-03 15:29:39.747825 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:29:39.747836 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:29:39.747847 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:29:39.747858 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:29:39.747869 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:29:39.747880 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:29:39.747891 | orchestrator | 2025-06-03 15:29:39.747902 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-06-03 15:29:39.747914 | orchestrator | Tuesday 03 June 2025 15:29:23 +0000 (0:00:04.466) 0:00:05.225 ********** 2025-06-03 15:29:39.747926 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-03 15:29:39.747937 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-03 15:29:39.747948 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-03 15:29:39.747959 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-03 15:29:39.747977 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-03 15:29:39.747989 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-03 15:29:39.748025 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-03 15:29:39.748037 | orchestrator | 2025-06-03 15:29:39.748048 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-06-03 15:29:39.748059 | orchestrator | Tuesday 03 June 2025 15:29:26 +0000 (0:00:02.360) 0:00:07.585 ********** 2025-06-03 15:29:39.748074 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:24.993663', 'end': '2025-06-03 15:29:24.997048', 'delta': '0:00:00.003385', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:29:39.748090 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:25.036373', 'end': '2025-06-03 15:29:25.043999', 'delta': '0:00:00.007626', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:29:39.748101 | 
orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:25.052509', 'end': '2025-06-03 15:29:25.061370', 'delta': '0:00:00.008861', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:29:39.748128 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:25.434425', 'end': '2025-06-03 15:29:25.443647', 'delta': '0:00:00.009222', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:29:39.748146 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:25.660659', 'end': '2025-06-03 15:29:25.667490', 'delta': '0:00:00.006831', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:29:39.748172 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:25.779489', 'end': '2025-06-03 15:29:25.784475', 'delta': '0:00:00.004986', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:29:39.748184 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:25.919171', 'end': '2025-06-03 15:29:25.928052', 'delta': '0:00:00.008881', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:29:39.748196 | orchestrator | 2025-06-03 15:29:39.748207 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-06-03 15:29:39.748218 | orchestrator | Tuesday 03 June 2025 15:29:29 +0000 (0:00:02.897) 0:00:10.482 ********** 2025-06-03 15:29:39.748229 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-03 15:29:39.748240 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-03 15:29:39.748251 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-03 15:29:39.748262 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-03 15:29:39.748272 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-03 15:29:39.748283 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-03 15:29:39.748294 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-03 15:29:39.748305 | orchestrator | 2025-06-03 15:29:39.748316 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-06-03 15:29:39.748326 | orchestrator | Tuesday 03 June 2025 15:29:31 +0000 (0:00:02.313) 0:00:12.795 ********** 2025-06-03 15:29:39.748337 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-03 15:29:39.748348 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-03 15:29:39.748359 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-03 15:29:39.748370 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-03 15:29:39.748381 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-03 15:29:39.748392 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-03 15:29:39.748402 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-03 15:29:39.748413 | orchestrator | 2025-06-03 15:29:39.748424 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:29:39.748442 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:29:39.748462 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:29:39.748473 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:29:39.748485 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:29:39.748496 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:29:39.748506 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:29:39.748518 | orchestrator | testbed-node-5 : ok=5  
changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:29:39.748568 | orchestrator | 2025-06-03 15:29:39.748579 | orchestrator | 2025-06-03 15:29:39.748590 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:29:39.748601 | orchestrator | Tuesday 03 June 2025 15:29:36 +0000 (0:00:04.742) 0:00:17.538 ********** 2025-06-03 15:29:39.748612 | orchestrator | =============================================================================== 2025-06-03 15:29:39.748623 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.74s 2025-06-03 15:29:39.748635 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.47s 2025-06-03 15:29:39.748645 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.90s 2025-06-03 15:29:39.748656 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.36s 2025-06-03 15:29:39.748667 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.31s 2025-06-03 15:29:39.749045 | orchestrator | 2025-06-03 15:29:39 | INFO  | Task c5829c38-5fe4-4e03-a8ab-390d1071764c is in state SUCCESS 2025-06-03 15:29:39.749444 | orchestrator | 2025-06-03 15:29:39 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:39.755382 | orchestrator | 2025-06-03 15:29:39 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:39.756489 | orchestrator | 2025-06-03 15:29:39 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:39.762734 | orchestrator | 2025-06-03 15:29:39 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:39.766003 | orchestrator | 2025-06-03 15:29:39 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:39.766434 | orchestrator | 2025-06-03 15:29:39 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:29:39.766504 | orchestrator | 2025-06-03 15:29:39 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:42.809047 | orchestrator | 2025-06-03 15:29:42 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:42.809140 | orchestrator | 2025-06-03 15:29:42 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:42.809314 | orchestrator | 2025-06-03 15:29:42 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:42.809519 | orchestrator | 2025-06-03 15:29:42 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:42.814335 | orchestrator | 2025-06-03 15:29:42 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:42.814387 | orchestrator | 2025-06-03 15:29:42 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:42.814393 | orchestrator | 2025-06-03 15:29:42 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:29:42.814399 | orchestrator | 2025-06-03 15:29:42 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:45.911806 | orchestrator | 2025-06-03 15:29:45 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:45.915281 | orchestrator | 2025-06-03 15:29:45 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 
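The geerlingguy.dotfiles play above clones a dotfiles repository to each host and then links the single configured file, .tmux.conf, into the user's home directory (/home/dragon on these nodes). For orientation, a minimal playbook that drives the role in the same way could look like the following sketch; the repository URL and checkout path are placeholders rather than values taken from this job, and the variable names follow the role's documented defaults:

    ---
    # Illustrative only: mirrors the shape of the dotfiles play logged above.
    - name: Apply role geerlingguy.dotfiles
      hosts: all
      vars:
        dotfiles_repo: https://github.com/example/dotfiles.git   # placeholder URL
        dotfiles_repo_local_destination: ~/dotfiles               # placeholder path
        dotfiles_files:
          - .tmux.conf                                            # matches the item in the log
      roles:
        - geerlingguy.dotfiles

With only .tmux.conf listed, the role's remaining tasks (removing a conflicting file, creating parent folders, linking) operate on that one item per host, which is exactly the pattern visible in the recap above.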
2025-06-03 15:29:45.917452 | orchestrator | 2025-06-03 15:29:45 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:45.917482 | orchestrator | 2025-06-03 15:29:45 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:45.921412 | orchestrator | 2025-06-03 15:29:45 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:45.924721 | orchestrator | 2025-06-03 15:29:45 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:45.924769 | orchestrator | 2025-06-03 15:29:45 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:29:45.924816 | orchestrator | 2025-06-03 15:29:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:48.982161 | orchestrator | 2025-06-03 15:29:48 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:48.983007 | orchestrator | 2025-06-03 15:29:48 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:48.985168 | orchestrator | 2025-06-03 15:29:48 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:48.986300 | orchestrator | 2025-06-03 15:29:48 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:48.986498 | orchestrator | 2025-06-03 15:29:48 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:48.990674 | orchestrator | 2025-06-03 15:29:48 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:48.990750 | orchestrator | 2025-06-03 15:29:48 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:29:48.990767 | orchestrator | 2025-06-03 15:29:48 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:52.033142 | orchestrator | 2025-06-03 15:29:52 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:52.033855 | orchestrator | 2025-06-03 15:29:52 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:52.035299 | orchestrator | 2025-06-03 15:29:52 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:52.036287 | orchestrator | 2025-06-03 15:29:52 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:52.037378 | orchestrator | 2025-06-03 15:29:52 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:52.039758 | orchestrator | 2025-06-03 15:29:52 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:52.040836 | orchestrator | 2025-06-03 15:29:52 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:29:52.040860 | orchestrator | 2025-06-03 15:29:52 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:55.105046 | orchestrator | 2025-06-03 15:29:55 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state STARTED 2025-06-03 15:29:55.110910 | orchestrator | 2025-06-03 15:29:55 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:55.116440 | orchestrator | 2025-06-03 15:29:55 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:55.118260 | orchestrator | 2025-06-03 15:29:55 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:55.123965 | orchestrator | 2025-06-03 15:29:55 | INFO  | Task 
5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:55.127253 | orchestrator | 2025-06-03 15:29:55 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:55.134705 | orchestrator | 2025-06-03 15:29:55 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:29:55.134737 | orchestrator | 2025-06-03 15:29:55 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:58.217750 | orchestrator | 2025-06-03 15:29:58 | INFO  | Task f651619b-e1b1-4bc6-b38a-0b9b9f65e473 is in state SUCCESS 2025-06-03 15:29:58.218521 | orchestrator | 2025-06-03 15:29:58 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:29:58.218603 | orchestrator | 2025-06-03 15:29:58 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:29:58.220778 | orchestrator | 2025-06-03 15:29:58 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:29:58.220842 | orchestrator | 2025-06-03 15:29:58 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:29:58.220852 | orchestrator | 2025-06-03 15:29:58 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:29:58.221861 | orchestrator | 2025-06-03 15:29:58 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:29:58.221909 | orchestrator | 2025-06-03 15:29:58 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:01.266897 | orchestrator | 2025-06-03 15:30:01 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:30:01.271183 | orchestrator | 2025-06-03 15:30:01 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:30:01.274947 | orchestrator | 2025-06-03 15:30:01 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:01.274992 | orchestrator | 2025-06-03 15:30:01 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:01.277030 | orchestrator | 2025-06-03 15:30:01 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:01.282368 | orchestrator | 2025-06-03 15:30:01 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:01.282422 | orchestrator | 2025-06-03 15:30:01 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:04.325819 | orchestrator | 2025-06-03 15:30:04 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:30:04.332100 | orchestrator | 2025-06-03 15:30:04 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:30:04.333612 | orchestrator | 2025-06-03 15:30:04 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:04.339688 | orchestrator | 2025-06-03 15:30:04 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:04.339735 | orchestrator | 2025-06-03 15:30:04 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:04.339744 | orchestrator | 2025-06-03 15:30:04 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:04.339767 | orchestrator | 2025-06-03 15:30:04 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:07.401255 | orchestrator | 2025-06-03 15:30:07 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state STARTED 2025-06-03 15:30:07.408603 | orchestrator | 2025-06-03 
15:30:07 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:30:07.415273 | orchestrator | 2025-06-03 15:30:07 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:07.415341 | orchestrator | 2025-06-03 15:30:07 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:07.418439 | orchestrator | 2025-06-03 15:30:07 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:07.424566 | orchestrator | 2025-06-03 15:30:07 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:07.424641 | orchestrator | 2025-06-03 15:30:07 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:10.470515 | orchestrator | 2025-06-03 15:30:10 | INFO  | Task c35925de-2019-42fc-8c07-47656e4e2739 is in state SUCCESS 2025-06-03 15:30:10.472837 | orchestrator | 2025-06-03 15:30:10 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:30:10.476147 | orchestrator | 2025-06-03 15:30:10 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:10.477466 | orchestrator | 2025-06-03 15:30:10 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:10.479133 | orchestrator | 2025-06-03 15:30:10 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:10.482239 | orchestrator | 2025-06-03 15:30:10 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:10.482270 | orchestrator | 2025-06-03 15:30:10 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:13.531019 | orchestrator | 2025-06-03 15:30:13 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:30:13.538508 | orchestrator | 2025-06-03 15:30:13 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:13.541806 | orchestrator | 2025-06-03 15:30:13 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:13.542290 | orchestrator | 2025-06-03 15:30:13 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:13.543857 | orchestrator | 2025-06-03 15:30:13 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:13.544305 | orchestrator | 2025-06-03 15:30:13 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:16.594286 | orchestrator | 2025-06-03 15:30:16 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:30:16.596944 | orchestrator | 2025-06-03 15:30:16 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:16.597814 | orchestrator | 2025-06-03 15:30:16 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:16.602162 | orchestrator | 2025-06-03 15:30:16 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:16.603107 | orchestrator | 2025-06-03 15:30:16 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:16.603645 | orchestrator | 2025-06-03 15:30:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:19.666536 | orchestrator | 2025-06-03 15:30:19 | INFO  | Task a7662855-8866-4bdd-875d-0a711d045339 is in state STARTED 2025-06-03 15:30:19.668494 | orchestrator | 2025-06-03 15:30:19 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:19.670463 | 
orchestrator | 2025-06-03 15:30:19 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:19.671742 | orchestrator | 2025-06-03 15:30:19 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:19.673650 | orchestrator | 2025-06-03 15:30:19 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:19.673681 | orchestrator | 2025-06-03 15:30:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:22.728860 | orchestrator | 2025-06-03 15:30:22.728950 | orchestrator | 2025-06-03 15:30:22.728959 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-03 15:30:22.728967 | orchestrator | 2025-06-03 15:30:22.728974 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-06-03 15:30:22.728982 | orchestrator | Tuesday 03 June 2025 15:29:20 +0000 (0:00:01.068) 0:00:01.068 ********** 2025-06-03 15:30:22.728989 | orchestrator | ok: [testbed-manager] => { 2025-06-03 15:30:22.728999 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-06-03 15:30:22.729007 | orchestrator | } 2025-06-03 15:30:22.729016 | orchestrator | 2025-06-03 15:30:22.729023 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-03 15:30:22.729030 | orchestrator | Tuesday 03 June 2025 15:29:21 +0000 (0:00:00.630) 0:00:01.699 ********** 2025-06-03 15:30:22.729037 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:22.729044 | orchestrator | 2025-06-03 15:30:22.729051 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-03 15:30:22.729058 | orchestrator | Tuesday 03 June 2025 15:29:23 +0000 (0:00:01.986) 0:00:03.686 ********** 2025-06-03 15:30:22.729065 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-03 15:30:22.729073 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-03 15:30:22.729080 | orchestrator | 2025-06-03 15:30:22.729086 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-03 15:30:22.729093 | orchestrator | Tuesday 03 June 2025 15:29:24 +0000 (0:00:01.665) 0:00:05.351 ********** 2025-06-03 15:30:22.729100 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.729106 | orchestrator | 2025-06-03 15:30:22.729113 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-03 15:30:22.729119 | orchestrator | Tuesday 03 June 2025 15:29:27 +0000 (0:00:02.611) 0:00:07.963 ********** 2025-06-03 15:30:22.729124 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.729128 | orchestrator | 2025-06-03 15:30:22.729132 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-03 15:30:22.729136 | orchestrator | Tuesday 03 June 2025 15:29:28 +0000 (0:00:01.408) 0:00:09.371 ********** 2025-06-03 15:30:22.729141 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
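The "FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left)" line above is Ansible's standard output for a task with a retry loop: the task is re-run until its until: condition holds or the retries are exhausted, and each failed attempt prints the remaining count. A generic sketch of such a task is shown below; it is not the actual osism.services.homer implementation, and the module and arguments are illustrative only:

    # Illustrative retry loop, not the osism.services.homer source.
    - name: Manage homer service
      community.docker.docker_compose_v2:
        project_src: /opt/homer        # directory holding the docker-compose.yml copied above
        state: present
      register: result
      until: result is success
      retries: 10
      delay: 5

In the run logged here the first attempt fails (the service is still coming up) and a later retry succeeds, so the task ends in "ok" rather than consuming all ten retries.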
2025-06-03 15:30:22.729145 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:22.729149 | orchestrator | 2025-06-03 15:30:22.729153 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-06-03 15:30:22.729157 | orchestrator | Tuesday 03 June 2025 15:29:53 +0000 (0:00:24.567) 0:00:33.939 ********** 2025-06-03 15:30:22.729161 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.729164 | orchestrator | 2025-06-03 15:30:22.729168 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:30:22.729173 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:22.729178 | orchestrator | 2025-06-03 15:30:22.729182 | orchestrator | 2025-06-03 15:30:22.729186 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:30:22.729190 | orchestrator | Tuesday 03 June 2025 15:29:55 +0000 (0:00:01.735) 0:00:35.675 ********** 2025-06-03 15:30:22.729207 | orchestrator | =============================================================================== 2025-06-03 15:30:22.729211 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.57s 2025-06-03 15:30:22.729215 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.61s 2025-06-03 15:30:22.729219 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.99s 2025-06-03 15:30:22.729223 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.74s 2025-06-03 15:30:22.729227 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.67s 2025-06-03 15:30:22.729231 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.41s 2025-06-03 15:30:22.729235 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.63s 2025-06-03 15:30:22.729239 | orchestrator | 2025-06-03 15:30:22.729243 | orchestrator | 2025-06-03 15:30:22.729246 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-06-03 15:30:22.729250 | orchestrator | 2025-06-03 15:30:22.729254 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-06-03 15:30:22.729258 | orchestrator | Tuesday 03 June 2025 15:29:20 +0000 (0:00:00.753) 0:00:00.753 ********** 2025-06-03 15:30:22.729267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-06-03 15:30:22.729273 | orchestrator | 2025-06-03 15:30:22.729276 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-06-03 15:30:22.729280 | orchestrator | Tuesday 03 June 2025 15:29:21 +0000 (0:00:00.607) 0:00:01.360 ********** 2025-06-03 15:30:22.729284 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-06-03 15:30:22.729288 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-06-03 15:30:22.729292 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-06-03 15:30:22.729296 | orchestrator | 2025-06-03 15:30:22.729300 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-06-03 
15:30:22.729304 | orchestrator | Tuesday 03 June 2025 15:29:23 +0000 (0:00:01.817) 0:00:03.178 ********** 2025-06-03 15:30:22.729308 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.729312 | orchestrator | 2025-06-03 15:30:22.729316 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-06-03 15:30:22.729320 | orchestrator | Tuesday 03 June 2025 15:29:25 +0000 (0:00:02.295) 0:00:05.473 ********** 2025-06-03 15:30:22.729341 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-06-03 15:30:22.729349 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:22.729355 | orchestrator | 2025-06-03 15:30:22.729362 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-06-03 15:30:22.729368 | orchestrator | Tuesday 03 June 2025 15:30:02 +0000 (0:00:36.769) 0:00:42.243 ********** 2025-06-03 15:30:22.729376 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.729382 | orchestrator | 2025-06-03 15:30:22.729389 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-06-03 15:30:22.729395 | orchestrator | Tuesday 03 June 2025 15:30:03 +0000 (0:00:01.379) 0:00:43.622 ********** 2025-06-03 15:30:22.729401 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:22.729407 | orchestrator | 2025-06-03 15:30:22.729413 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-06-03 15:30:22.729421 | orchestrator | Tuesday 03 June 2025 15:30:04 +0000 (0:00:01.120) 0:00:44.743 ********** 2025-06-03 15:30:22.729428 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.729435 | orchestrator | 2025-06-03 15:30:22.729442 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-06-03 15:30:22.729449 | orchestrator | Tuesday 03 June 2025 15:30:06 +0000 (0:00:02.226) 0:00:46.969 ********** 2025-06-03 15:30:22.729455 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.729467 | orchestrator | 2025-06-03 15:30:22.729473 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-06-03 15:30:22.729480 | orchestrator | Tuesday 03 June 2025 15:30:08 +0000 (0:00:01.232) 0:00:48.201 ********** 2025-06-03 15:30:22.729486 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.729493 | orchestrator | 2025-06-03 15:30:22.729500 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-06-03 15:30:22.729506 | orchestrator | Tuesday 03 June 2025 15:30:08 +0000 (0:00:00.647) 0:00:48.849 ********** 2025-06-03 15:30:22.729513 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:22.729520 | orchestrator | 2025-06-03 15:30:22.729526 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:30:22.729532 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:22.729539 | orchestrator | 2025-06-03 15:30:22.729545 | orchestrator | 2025-06-03 15:30:22.729594 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:30:22.729602 | orchestrator | Tuesday 03 June 2025 15:30:09 +0000 (0:00:00.330) 0:00:49.179 ********** 2025-06-03 15:30:22.729609 | orchestrator | 
=============================================================================== 2025-06-03 15:30:22.729616 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.77s 2025-06-03 15:30:22.729624 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.30s 2025-06-03 15:30:22.729631 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.23s 2025-06-03 15:30:22.729638 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.82s 2025-06-03 15:30:22.729646 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.38s 2025-06-03 15:30:22.729653 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.23s 2025-06-03 15:30:22.729661 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.12s 2025-06-03 15:30:22.729668 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.65s 2025-06-03 15:30:22.729675 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.61s 2025-06-03 15:30:22.729683 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.33s 2025-06-03 15:30:22.729690 | orchestrator | 2025-06-03 15:30:22.729697 | orchestrator | 2025-06-03 15:30:22.729705 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:30:22.729712 | orchestrator | 2025-06-03 15:30:22.729720 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:30:22.729727 | orchestrator | Tuesday 03 June 2025 15:29:19 +0000 (0:00:00.323) 0:00:00.323 ********** 2025-06-03 15:30:22.729734 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-03 15:30:22.729742 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-03 15:30:22.729749 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-06-03 15:30:22.729756 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-06-03 15:30:22.729767 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-03 15:30:22.729775 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-03 15:30:22.729782 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-06-03 15:30:22.729790 | orchestrator | 2025-06-03 15:30:22.729797 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-03 15:30:22.729804 | orchestrator | 2025-06-03 15:30:22.729811 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-03 15:30:22.729818 | orchestrator | Tuesday 03 June 2025 15:29:21 +0000 (0:00:01.775) 0:00:02.098 ********** 2025-06-03 15:30:22.729836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:30:22.729853 | orchestrator | 2025-06-03 15:30:22.729860 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-06-03 15:30:22.729868 | orchestrator | Tuesday 03 June 2025 15:29:23 +0000 (0:00:02.391) 0:00:04.489 ********** 2025-06-03 
15:30:22.729875 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:30:22.729882 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:30:22.729890 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:22.729897 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:30:22.729904 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:30:22.729915 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:30:22.729922 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:30:22.729928 | orchestrator | 2025-06-03 15:30:22.729934 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-06-03 15:30:22.729941 | orchestrator | Tuesday 03 June 2025 15:29:26 +0000 (0:00:02.898) 0:00:07.388 ********** 2025-06-03 15:30:22.729947 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:22.729955 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:30:22.729962 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:30:22.729969 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:30:22.729976 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:30:22.729983 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:30:22.729990 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:30:22.729997 | orchestrator | 2025-06-03 15:30:22.730004 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-06-03 15:30:22.730011 | orchestrator | Tuesday 03 June 2025 15:29:30 +0000 (0:00:03.877) 0:00:11.266 ********** 2025-06-03 15:30:22.730116 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.730122 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:30:22.730126 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:30:22.730130 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:30:22.730134 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:30:22.730137 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:30:22.730141 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:30:22.730145 | orchestrator | 2025-06-03 15:30:22.730149 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-06-03 15:30:22.730153 | orchestrator | Tuesday 03 June 2025 15:29:33 +0000 (0:00:02.865) 0:00:14.131 ********** 2025-06-03 15:30:22.730157 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.730161 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:30:22.730165 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:30:22.730169 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:30:22.730172 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:30:22.730176 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:30:22.730180 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:30:22.730184 | orchestrator | 2025-06-03 15:30:22.730188 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-06-03 15:30:22.730192 | orchestrator | Tuesday 03 June 2025 15:29:43 +0000 (0:00:10.435) 0:00:24.566 ********** 2025-06-03 15:30:22.730196 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:30:22.730200 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:30:22.730204 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:30:22.730208 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:30:22.730260 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:30:22.730265 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:30:22.730269 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.730273 | 
orchestrator | 2025-06-03 15:30:22.730277 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-06-03 15:30:22.730281 | orchestrator | Tuesday 03 June 2025 15:29:59 +0000 (0:00:16.298) 0:00:40.864 ********** 2025-06-03 15:30:22.730286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:30:22.730302 | orchestrator | 2025-06-03 15:30:22.730306 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-06-03 15:30:22.730310 | orchestrator | Tuesday 03 June 2025 15:30:01 +0000 (0:00:01.910) 0:00:42.774 ********** 2025-06-03 15:30:22.730313 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-06-03 15:30:22.730318 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-06-03 15:30:22.730322 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-06-03 15:30:22.730325 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-06-03 15:30:22.730332 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-06-03 15:30:22.730339 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-06-03 15:30:22.730345 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-06-03 15:30:22.730351 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-06-03 15:30:22.730358 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-06-03 15:30:22.730364 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-06-03 15:30:22.730370 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-06-03 15:30:22.730377 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-06-03 15:30:22.730383 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-06-03 15:30:22.730389 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-06-03 15:30:22.730396 | orchestrator | 2025-06-03 15:30:22.730403 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-06-03 15:30:22.730407 | orchestrator | Tuesday 03 June 2025 15:30:08 +0000 (0:00:06.196) 0:00:48.971 ********** 2025-06-03 15:30:22.730438 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:22.730446 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:30:22.730453 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:30:22.730459 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:30:22.730466 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:30:22.730472 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:30:22.730479 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:30:22.730485 | orchestrator | 2025-06-03 15:30:22.730491 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-06-03 15:30:22.730498 | orchestrator | Tuesday 03 June 2025 15:30:09 +0000 (0:00:01.213) 0:00:50.185 ********** 2025-06-03 15:30:22.730505 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.730511 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:30:22.730518 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:30:22.730524 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:30:22.730531 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:30:22.730537 | orchestrator | 
changed: [testbed-node-4] 2025-06-03 15:30:22.730543 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:30:22.730549 | orchestrator | 2025-06-03 15:30:22.730572 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-06-03 15:30:22.730586 | orchestrator | Tuesday 03 June 2025 15:30:11 +0000 (0:00:01.776) 0:00:51.961 ********** 2025-06-03 15:30:22.730592 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:22.730599 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:30:22.730605 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:30:22.730612 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:30:22.730618 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:30:22.730624 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:30:22.730630 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:30:22.730637 | orchestrator | 2025-06-03 15:30:22.730643 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-06-03 15:30:22.730650 | orchestrator | Tuesday 03 June 2025 15:30:12 +0000 (0:00:01.295) 0:00:53.257 ********** 2025-06-03 15:30:22.730657 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:30:22.730663 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:30:22.730669 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:22.730676 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:30:22.730689 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:30:22.730696 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:30:22.730702 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:30:22.730709 | orchestrator | 2025-06-03 15:30:22.730715 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-06-03 15:30:22.730731 | orchestrator | Tuesday 03 June 2025 15:30:14 +0000 (0:00:01.958) 0:00:55.215 ********** 2025-06-03 15:30:22.730737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-06-03 15:30:22.730746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:30:22.730753 | orchestrator | 2025-06-03 15:30:22.730760 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-06-03 15:30:22.730766 | orchestrator | Tuesday 03 June 2025 15:30:16 +0000 (0:00:01.754) 0:00:56.970 ********** 2025-06-03 15:30:22.730773 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.730779 | orchestrator | 2025-06-03 15:30:22.730786 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-06-03 15:30:22.730792 | orchestrator | Tuesday 03 June 2025 15:30:18 +0000 (0:00:01.984) 0:00:58.954 ********** 2025-06-03 15:30:22.730799 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:30:22.730805 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:30:22.730811 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:22.730818 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:30:22.730824 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:30:22.730831 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:30:22.730837 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:30:22.730843 | orchestrator | 2025-06-03 15:30:22.730850 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-03 15:30:22.730857 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:22.730863 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:22.730870 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:22.730876 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:22.730883 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:22.730889 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:22.730896 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:22.730902 | orchestrator | 2025-06-03 15:30:22.730909 | orchestrator | 2025-06-03 15:30:22.730915 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:30:22.730922 | orchestrator | Tuesday 03 June 2025 15:30:21 +0000 (0:00:03.890) 0:01:02.845 ********** 2025-06-03 15:30:22.730932 | orchestrator | =============================================================================== 2025-06-03 15:30:22.730939 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.30s 2025-06-03 15:30:22.730945 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.44s 2025-06-03 15:30:22.730951 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.20s 2025-06-03 15:30:22.730958 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.89s 2025-06-03 15:30:22.730969 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.88s 2025-06-03 15:30:22.730975 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.90s 2025-06-03 15:30:22.730982 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.86s 2025-06-03 15:30:22.730989 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.39s 2025-06-03 15:30:22.730995 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.98s 2025-06-03 15:30:22.731002 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.96s 2025-06-03 15:30:22.731008 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.91s 2025-06-03 15:30:22.731019 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.78s 2025-06-03 15:30:22.731026 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.78s 2025-06-03 15:30:22.731033 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.75s 2025-06-03 15:30:22.731039 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.30s 2025-06-03 15:30:22.731046 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.21s 2025-06-03 15:30:22.731053 | orchestrator | 2025-06-03 15:30:22 | INFO  | Task 
a7662855-8866-4bdd-875d-0a711d045339 is in state SUCCESS 2025-06-03 15:30:22.731060 | orchestrator | 2025-06-03 15:30:22 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:22.731067 | orchestrator | 2025-06-03 15:30:22 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:22.731889 | orchestrator | 2025-06-03 15:30:22 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:22.733857 | orchestrator | 2025-06-03 15:30:22 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:22.733981 | orchestrator | 2025-06-03 15:30:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:25.760783 | orchestrator | 2025-06-03 15:30:25 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:25.760857 | orchestrator | 2025-06-03 15:30:25 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:25.761127 | orchestrator | 2025-06-03 15:30:25 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:25.763502 | orchestrator | 2025-06-03 15:30:25 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:25.763530 | orchestrator | 2025-06-03 15:30:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:28.796535 | orchestrator | 2025-06-03 15:30:28 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:28.796692 | orchestrator | 2025-06-03 15:30:28 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:28.797890 | orchestrator | 2025-06-03 15:30:28 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:28.800295 | orchestrator | 2025-06-03 15:30:28 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:28.800366 | orchestrator | 2025-06-03 15:30:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:31.834634 | orchestrator | 2025-06-03 15:30:31 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:31.835268 | orchestrator | 2025-06-03 15:30:31 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:31.835374 | orchestrator | 2025-06-03 15:30:31 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:31.835950 | orchestrator | 2025-06-03 15:30:31 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:31.836033 | orchestrator | 2025-06-03 15:30:31 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:34.873548 | orchestrator | 2025-06-03 15:30:34 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:34.874758 | orchestrator | 2025-06-03 15:30:34 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:34.876204 | orchestrator | 2025-06-03 15:30:34 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:34.877240 | orchestrator | 2025-06-03 15:30:34 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:34.877290 | orchestrator | 2025-06-03 15:30:34 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:37.921776 | orchestrator | 2025-06-03 15:30:37 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:37.922674 | orchestrator | 2025-06-03 15:30:37 | INFO  | Task 
5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:37.925143 | orchestrator | 2025-06-03 15:30:37 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:37.927642 | orchestrator | 2025-06-03 15:30:37 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:37.928211 | orchestrator | 2025-06-03 15:30:37 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:40.973079 | orchestrator | 2025-06-03 15:30:40 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:40.978393 | orchestrator | 2025-06-03 15:30:40 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:40.983550 | orchestrator | 2025-06-03 15:30:40 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:40.986468 | orchestrator | 2025-06-03 15:30:40 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:40.987211 | orchestrator | 2025-06-03 15:30:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:44.037924 | orchestrator | 2025-06-03 15:30:44 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:44.040107 | orchestrator | 2025-06-03 15:30:44 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:44.040965 | orchestrator | 2025-06-03 15:30:44 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:44.042087 | orchestrator | 2025-06-03 15:30:44 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:44.042115 | orchestrator | 2025-06-03 15:30:44 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:47.079646 | orchestrator | 2025-06-03 15:30:47 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:47.080174 | orchestrator | 2025-06-03 15:30:47 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:47.081084 | orchestrator | 2025-06-03 15:30:47 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:47.083429 | orchestrator | 2025-06-03 15:30:47 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:47.083784 | orchestrator | 2025-06-03 15:30:47 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:50.121522 | orchestrator | 2025-06-03 15:30:50 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:50.123290 | orchestrator | 2025-06-03 15:30:50 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:50.124824 | orchestrator | 2025-06-03 15:30:50 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:50.127268 | orchestrator | 2025-06-03 15:30:50 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:50.128533 | orchestrator | 2025-06-03 15:30:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:53.171158 | orchestrator | 2025-06-03 15:30:53 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:53.171847 | orchestrator | 2025-06-03 15:30:53 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:53.173143 | orchestrator | 2025-06-03 15:30:53 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:53.174419 | orchestrator | 2025-06-03 15:30:53 | INFO  | Task 
04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:53.174470 | orchestrator | 2025-06-03 15:30:53 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:56.216897 | orchestrator | 2025-06-03 15:30:56 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:56.217035 | orchestrator | 2025-06-03 15:30:56 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:56.218806 | orchestrator | 2025-06-03 15:30:56 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:56.220403 | orchestrator | 2025-06-03 15:30:56 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:56.220446 | orchestrator | 2025-06-03 15:30:56 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:59.268909 | orchestrator | 2025-06-03 15:30:59 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:30:59.270866 | orchestrator | 2025-06-03 15:30:59 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:30:59.274307 | orchestrator | 2025-06-03 15:30:59 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:30:59.280841 | orchestrator | 2025-06-03 15:30:59 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state STARTED 2025-06-03 15:30:59.281059 | orchestrator | 2025-06-03 15:30:59 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:02.356775 | orchestrator | 2025-06-03 15:31:02 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:02.357093 | orchestrator | 2025-06-03 15:31:02 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:02.358488 | orchestrator | 2025-06-03 15:31:02 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:02.359299 | orchestrator | 2025-06-03 15:31:02 | INFO  | Task 04204c70-0b59-46bf-b93b-64717829e6ca is in state SUCCESS 2025-06-03 15:31:02.359342 | orchestrator | 2025-06-03 15:31:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:05.402300 | orchestrator | 2025-06-03 15:31:05 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:05.402704 | orchestrator | 2025-06-03 15:31:05 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:05.404809 | orchestrator | 2025-06-03 15:31:05 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:05.404895 | orchestrator | 2025-06-03 15:31:05 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:08.453172 | orchestrator | 2025-06-03 15:31:08 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:08.457741 | orchestrator | 2025-06-03 15:31:08 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:08.460291 | orchestrator | 2025-06-03 15:31:08 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:08.460350 | orchestrator | 2025-06-03 15:31:08 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:11.508380 | orchestrator | 2025-06-03 15:31:11 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:11.509155 | orchestrator | 2025-06-03 15:31:11 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:11.512304 | orchestrator | 2025-06-03 15:31:11 | INFO  | Task 
4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:11.512370 | orchestrator | 2025-06-03 15:31:11 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:14.590705 | orchestrator | 2025-06-03 15:31:14 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:14.590823 | orchestrator | 2025-06-03 15:31:14 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:14.591762 | orchestrator | 2025-06-03 15:31:14 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:14.591836 | orchestrator | 2025-06-03 15:31:14 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:17.643962 | orchestrator | 2025-06-03 15:31:17 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:17.645719 | orchestrator | 2025-06-03 15:31:17 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:17.648381 | orchestrator | 2025-06-03 15:31:17 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:17.648410 | orchestrator | 2025-06-03 15:31:17 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:20.692187 | orchestrator | 2025-06-03 15:31:20 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:20.694114 | orchestrator | 2025-06-03 15:31:20 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:20.695708 | orchestrator | 2025-06-03 15:31:20 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:20.695740 | orchestrator | 2025-06-03 15:31:20 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:23.760044 | orchestrator | 2025-06-03 15:31:23 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:23.761382 | orchestrator | 2025-06-03 15:31:23 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:23.764063 | orchestrator | 2025-06-03 15:31:23 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:23.764113 | orchestrator | 2025-06-03 15:31:23 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:26.806722 | orchestrator | 2025-06-03 15:31:26 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:26.807104 | orchestrator | 2025-06-03 15:31:26 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:26.809478 | orchestrator | 2025-06-03 15:31:26 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:26.809520 | orchestrator | 2025-06-03 15:31:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:29.852412 | orchestrator | 2025-06-03 15:31:29 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:29.854234 | orchestrator | 2025-06-03 15:31:29 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:29.855707 | orchestrator | 2025-06-03 15:31:29 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:29.855725 | orchestrator | 2025-06-03 15:31:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:32.898286 | orchestrator | 2025-06-03 15:31:32 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:32.899682 | orchestrator | 2025-06-03 15:31:32 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state 
STARTED 2025-06-03 15:31:32.901268 | orchestrator | 2025-06-03 15:31:32 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:32.901320 | orchestrator | 2025-06-03 15:31:32 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:35.947328 | orchestrator | 2025-06-03 15:31:35 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:35.948681 | orchestrator | 2025-06-03 15:31:35 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:35.950229 | orchestrator | 2025-06-03 15:31:35 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:35.950270 | orchestrator | 2025-06-03 15:31:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:38.984389 | orchestrator | 2025-06-03 15:31:38 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:38.984569 | orchestrator | 2025-06-03 15:31:38 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:38.985388 | orchestrator | 2025-06-03 15:31:38 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:38.985425 | orchestrator | 2025-06-03 15:31:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:42.029205 | orchestrator | 2025-06-03 15:31:42 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:42.029328 | orchestrator | 2025-06-03 15:31:42 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:42.029940 | orchestrator | 2025-06-03 15:31:42 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:42.029969 | orchestrator | 2025-06-03 15:31:42 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:45.107574 | orchestrator | 2025-06-03 15:31:45 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:45.113649 | orchestrator | 2025-06-03 15:31:45 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:45.124745 | orchestrator | 2025-06-03 15:31:45 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:45.124821 | orchestrator | 2025-06-03 15:31:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:48.154912 | orchestrator | 2025-06-03 15:31:48 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:48.156780 | orchestrator | 2025-06-03 15:31:48 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:48.158212 | orchestrator | 2025-06-03 15:31:48 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:48.158277 | orchestrator | 2025-06-03 15:31:48 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:51.208782 | orchestrator | 2025-06-03 15:31:51 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state STARTED 2025-06-03 15:31:51.209281 | orchestrator | 2025-06-03 15:31:51 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:51.213930 | orchestrator | 2025-06-03 15:31:51 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:51.216765 | orchestrator | 2025-06-03 15:31:51 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:31:54.273005 | orchestrator | 2025-06-03 15:31:54 | INFO  | Task 84fd7306-1838-4d0c-871f-329a3e427060 is in state SUCCESS 2025-06-03 15:31:54.274604 | orchestrator 
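
The block above is the deployment wrapper polling the OSISM manager until each queued task leaves the STARTED state. A minimal sketch of that wait-and-poll pattern, assuming a hypothetical get_task_state() lookup (the real client queries the manager's task backend, so the function name and signature here are placeholders):

```python
import time

# Hypothetical stand-in for whatever the real client uses to query the OSISM
# manager (e.g. a Celery result backend); not the actual API.
def get_task_state(task_id: str) -> str:
    raise NotImplementedError("replace with a real state lookup")

def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    """Poll until every task has left STARTED, mirroring the log output above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```
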
2025-06-03 15:31:54.274721 | orchestrator |
2025-06-03 15:31:54.274737 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-06-03 15:31:54.274750 | orchestrator |
2025-06-03 15:31:54.274762 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-06-03 15:31:54.274774 | orchestrator | Tuesday 03 June 2025 15:29:42 +0000 (0:00:00.301) 0:00:00.301 **********
2025-06-03 15:31:54.274786 | orchestrator | ok: [testbed-manager]
2025-06-03 15:31:54.274798 | orchestrator |
2025-06-03 15:31:54.274810 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-06-03 15:31:54.274821 | orchestrator | Tuesday 03 June 2025 15:29:43 +0000 (0:00:00.812) 0:00:01.113 **********
2025-06-03 15:31:54.274832 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-06-03 15:31:54.274843 | orchestrator |
2025-06-03 15:31:54.274854 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-06-03 15:31:54.274866 | orchestrator | Tuesday 03 June 2025 15:29:43 +0000 (0:00:00.798) 0:00:01.912 **********
2025-06-03 15:31:54.274876 | orchestrator | changed: [testbed-manager]
2025-06-03 15:31:54.274887 | orchestrator |
2025-06-03 15:31:54.274898 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-06-03 15:31:54.274910 | orchestrator | Tuesday 03 June 2025 15:29:46 +0000 (0:00:02.396) 0:00:04.308 **********
2025-06-03 15:31:54.274921 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-06-03 15:31:54.274932 | orchestrator | ok: [testbed-manager]
2025-06-03 15:31:54.274943 | orchestrator |
2025-06-03 15:31:54.274954 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-06-03 15:31:54.274964 | orchestrator | Tuesday 03 June 2025 15:30:46 +0000 (0:00:59.825) 0:01:04.134 **********
2025-06-03 15:31:54.274975 | orchestrator | changed: [testbed-manager]
2025-06-03 15:31:54.274986 | orchestrator |
2025-06-03 15:31:54.274997 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:31:54.275009 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:31:54.275021 | orchestrator |
2025-06-03 15:31:54.275032 | orchestrator |
2025-06-03 15:31:54.275043 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:31:54.275054 | orchestrator | Tuesday 03 June 2025 15:30:59 +0000 (0:00:13.593) 0:01:17.727 **********
2025-06-03 15:31:54.275065 | orchestrator | ===============================================================================
2025-06-03 15:31:54.275076 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 59.83s
2025-06-03 15:31:54.275087 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 13.59s
2025-06-03 15:31:54.275098 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.40s
2025-06-03 15:31:54.275109 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.81s
2025-06-03 15:31:54.275119 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.80s
2025-06-03 15:31:54.275130 | orchestrator |
2025-06-03 15:31:54.275141 |
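
The phpmyadmin play above only prepares /opt/phpmyadmin, templates a docker-compose.yml, and attaches the service to the external traefik network (the first task reports "ok" because the network already existed). A rough Python equivalent using the Docker SDK, purely illustrative and not the role's actual implementation (the role drives docker compose; image tag, PMA_HOST, and restart policy below are assumptions):

```python
import docker

client = docker.from_env()

# Ensure the external "traefik" network exists before attaching anything to it.
if not client.networks.list(names=["traefik"]):
    client.networks.create("traefik", driver="bridge")

# Run phpMyAdmin attached to that network; image and environment are assumptions.
client.containers.run(
    "phpmyadmin/phpmyadmin:latest",
    name="phpmyadmin",
    detach=True,
    network="traefik",
    environment={"PMA_HOST": "localhost"},
    restart_policy={"Name": "unless-stopped"},
)
```
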
orchestrator | 2025-06-03 15:31:54.275154 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-06-03 15:31:54.275166 | orchestrator | 2025-06-03 15:31:54.275178 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-03 15:31:54.275190 | orchestrator | Tuesday 03 June 2025 15:29:12 +0000 (0:00:00.364) 0:00:00.364 ********** 2025-06-03 15:31:54.275203 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:31:54.275243 | orchestrator | 2025-06-03 15:31:54.275255 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-06-03 15:31:54.275268 | orchestrator | Tuesday 03 June 2025 15:29:14 +0000 (0:00:01.729) 0:00:02.094 ********** 2025-06-03 15:31:54.275280 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-03 15:31:54.275293 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-03 15:31:54.275305 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-03 15:31:54.275317 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-03 15:31:54.275329 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-03 15:31:54.275342 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-03 15:31:54.275354 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-03 15:31:54.275367 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-03 15:31:54.275379 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-03 15:31:54.275393 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-03 15:31:54.275406 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-03 15:31:54.275418 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-03 15:31:54.275430 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-03 15:31:54.275442 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-03 15:31:54.275455 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-03 15:31:54.275467 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-03 15:31:54.275496 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-03 15:31:54.275510 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-03 15:31:54.275521 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-03 15:31:54.275532 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-03 15:31:54.275543 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-03 15:31:54.275554 | orchestrator | 
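
The remaining tasks of the common role all loop over the same per-service dictionary (cron, fluentd, kolla-toolbox) whose entries are echoed verbatim as loop items above and below. Reduced to the fields visible in the log, one entry looks like the sketch that follows; the directory loop at the end mirrors what "Ensuring config directories exist" does, with the /etc/kolla/<service> host path inferred from the volume mounts:

```python
import os

# Shape of the per-service dictionary the common role iterates over; the values
# are copied from the loop items printed in the log (one entry shown, trimmed).
common_services = {
    "cron": {
        "container_name": "cron",
        "group": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/cron:3.0.20250530",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": [
            "/etc/kolla/cron/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    # "fluentd" and "kolla-toolbox" follow the same structure (see the items above).
}

# "Ensuring config directories exist" boils down to one host directory per service.
for name in common_services:
    os.makedirs(f"/etc/kolla/{name}", exist_ok=True)
```
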
2025-06-03 15:31:54.275565 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-03 15:31:54.275575 | orchestrator | Tuesday 03 June 2025 15:29:18 +0000 (0:00:04.358) 0:00:06.453 ********** 2025-06-03 15:31:54.275587 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:31:54.275599 | orchestrator | 2025-06-03 15:31:54.275633 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-06-03 15:31:54.275646 | orchestrator | Tuesday 03 June 2025 15:29:20 +0000 (0:00:01.340) 0:00:07.793 ********** 2025-06-03 15:31:54.275673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.275699 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.275711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.275723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.275739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.275775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.275787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.275799 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.275824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.275836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.275848 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.275865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.275886 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.275909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.275922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.275940 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.275970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.275990 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.276010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.276029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.276055 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.276075 | orchestrator | 2025-06-03 15:31:54.276093 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-06-03 15:31:54.276112 | orchestrator | Tuesday 03 June 2025 15:29:25 +0000 (0:00:05.235) 0:00:13.028 ********** 2025-06-03 15:31:54.276125 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.276136 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-06-03 15:31:54.276157 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.276196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276233 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:31:54.276261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.276293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276306 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.276339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276362 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:31:54.276373 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:31:54.276385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.276402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276414 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276431 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:31:54.276442 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:31:54.276454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.276472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276496 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:31:54.276507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.276519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 
15:31:54.276531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276543 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:31:54.276554 | orchestrator | 2025-06-03 15:31:54.276565 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-06-03 15:31:54.276576 | orchestrator | Tuesday 03 June 2025 15:29:27 +0000 (0:00:01.573) 0:00:14.602 ********** 2025-06-03 15:31:54.276592 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.276690 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276732 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:31:54.276744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.276761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.276800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.276845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.277466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.277497 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:31:54.277509 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:31:54.277521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.277532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.277544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.277555 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:31:54.277574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.277587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.277598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.277674 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:31:54.277707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.277731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.277743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.277755 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:31:54.277767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:31:54.277778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.277789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.277801 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:31:54.277811 | orchestrator | 2025-06-03 15:31:54.277822 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-06-03 15:31:54.277834 | orchestrator | Tuesday 03 June 2025 15:29:29 +0000 (0:00:02.543) 0:00:17.145 ********** 2025-06-03 15:31:54.277845 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:31:54.277856 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:31:54.277867 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:31:54.277877 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:31:54.277888 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:31:54.277904 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:31:54.277915 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:31:54.277926 | orchestrator | 2025-06-03 15:31:54.277937 | orchestrator | TASK [common : Restart systemd-tmpfiles] 
*************************************** 2025-06-03 15:31:54.277948 | orchestrator | Tuesday 03 June 2025 15:29:30 +0000 (0:00:01.282) 0:00:18.428 ********** 2025-06-03 15:31:54.277961 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:31:54.277973 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:31:54.277985 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:31:54.277997 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:31:54.278009 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:31:54.278080 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:31:54.278099 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:31:54.278112 | orchestrator | 2025-06-03 15:31:54.278124 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-06-03 15:31:54.278137 | orchestrator | Tuesday 03 June 2025 15:29:31 +0000 (0:00:01.006) 0:00:19.435 ********** 2025-06-03 15:31:54.278165 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.278179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.278193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.278209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.278229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.278248 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.278480 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278547 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.278568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278658 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278728 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.278740 | orchestrator | 2025-06-03 15:31:54.278751 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-03 15:31:54.278762 | orchestrator | Tuesday 03 June 2025 15:29:38 +0000 (0:00:06.358) 0:00:25.793 ********** 2025-06-03 15:31:54.278774 | orchestrator | [WARNING]: Skipped 2025-06-03 15:31:54.278785 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-06-03 15:31:54.278796 | orchestrator | to this access issue: 2025-06-03 15:31:54.278807 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-06-03 15:31:54.278818 | orchestrator | directory 2025-06-03 15:31:54.278830 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:31:54.278840 | orchestrator | 2025-06-03 15:31:54.278851 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-06-03 15:31:54.278862 | orchestrator | Tuesday 03 June 2025 15:29:40 +0000 (0:00:02.055) 0:00:27.848 ********** 2025-06-03 15:31:54.278873 | orchestrator | [WARNING]: Skipped 2025-06-03 15:31:54.278884 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-06-03 15:31:54.278895 | orchestrator | to this 
access issue: 2025-06-03 15:31:54.278906 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-03 15:31:54.278921 | orchestrator | directory 2025-06-03 15:31:54.278946 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:31:54.278967 | orchestrator | 2025-06-03 15:31:54.278984 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-06-03 15:31:54.279002 | orchestrator | Tuesday 03 June 2025 15:29:41 +0000 (0:00:00.917) 0:00:28.766 ********** 2025-06-03 15:31:54.279013 | orchestrator | [WARNING]: Skipped 2025-06-03 15:31:54.279024 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-03 15:31:54.279035 | orchestrator | to this access issue: 2025-06-03 15:31:54.279046 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-03 15:31:54.279057 | orchestrator | directory 2025-06-03 15:31:54.279068 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:31:54.279078 | orchestrator | 2025-06-03 15:31:54.279096 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-06-03 15:31:54.279107 | orchestrator | Tuesday 03 June 2025 15:29:42 +0000 (0:00:00.858) 0:00:29.625 ********** 2025-06-03 15:31:54.279118 | orchestrator | [WARNING]: Skipped 2025-06-03 15:31:54.279129 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-03 15:31:54.279140 | orchestrator | to this access issue: 2025-06-03 15:31:54.279151 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-06-03 15:31:54.279161 | orchestrator | directory 2025-06-03 15:31:54.279172 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:31:54.279183 | orchestrator | 2025-06-03 15:31:54.279200 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-03 15:31:54.279219 | orchestrator | Tuesday 03 June 2025 15:29:42 +0000 (0:00:00.706) 0:00:30.331 ********** 2025-06-03 15:31:54.279258 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:31:54.279279 | orchestrator | changed: [testbed-manager] 2025-06-03 15:31:54.279291 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:31:54.279302 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:31:54.279325 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:31:54.279336 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:31:54.279347 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:31:54.279357 | orchestrator | 2025-06-03 15:31:54.279369 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-03 15:31:54.279380 | orchestrator | Tuesday 03 June 2025 15:29:48 +0000 (0:00:05.630) 0:00:35.961 ********** 2025-06-03 15:31:54.279391 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-03 15:31:54.279404 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-03 15:31:54.279415 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-03 15:31:54.279426 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-03 15:31:54.279437 | orchestrator | changed: 
[testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-03 15:31:54.279447 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-03 15:31:54.279458 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-03 15:31:54.279469 | orchestrator | 2025-06-03 15:31:54.279480 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-06-03 15:31:54.279491 | orchestrator | Tuesday 03 June 2025 15:29:50 +0000 (0:00:02.573) 0:00:38.535 ********** 2025-06-03 15:31:54.279502 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:31:54.279513 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:31:54.279524 | orchestrator | changed: [testbed-manager] 2025-06-03 15:31:54.279534 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:31:54.279545 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:31:54.279556 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:31:54.279567 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:31:54.279578 | orchestrator | 2025-06-03 15:31:54.279589 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-03 15:31:54.279600 | orchestrator | Tuesday 03 June 2025 15:29:54 +0000 (0:00:03.110) 0:00:41.645 ********** 2025-06-03 15:31:54.279634 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.279647 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.279665 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.279693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.279706 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.279717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.279730 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.279743 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.279755 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.279766 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.279788 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.279808 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.279819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.279831 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.279842 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.279854 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.279865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.279876 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.279907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:31:54.279919 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.279930 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.279941 | orchestrator | 2025-06-03 15:31:54.279952 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-03 15:31:54.279963 | orchestrator | Tuesday 03 June 2025 15:29:57 +0000 (0:00:03.642) 0:00:45.287 ********** 2025-06-03 15:31:54.279976 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-03 15:31:54.279994 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-03 15:31:54.280013 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-03 15:31:54.280031 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-03 15:31:54.280049 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-03 15:31:54.280065 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-03 15:31:54.280075 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-03 15:31:54.280087 | orchestrator | 2025-06-03 15:31:54.280097 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-03 15:31:54.280110 | orchestrator | Tuesday 03 June 2025 15:30:01 +0000 (0:00:03.501) 0:00:48.789 ********** 2025-06-03 15:31:54.280129 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-03 15:31:54.280148 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-03 15:31:54.280166 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-03 15:31:54.280185 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-03 15:31:54.280203 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-03 15:31:54.280215 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-03 15:31:54.280233 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-03 15:31:54.280252 | orchestrator | 2025-06-03 15:31:54.280263 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-03 15:31:54.280274 | orchestrator | Tuesday 03 June 2025 15:30:04 +0000 (0:00:03.557) 0:00:52.347 ********** 2025-06-03 15:31:54.280291 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.280303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.280323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.280336 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.280347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280359 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.280409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.280434 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:31:54.280458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280516 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:31:54.280592 | orchestrator | 2025-06-03 15:31:54.280604 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-03 15:31:54.280642 | orchestrator | Tuesday 03 June 2025 15:30:08 +0000 (0:00:04.177) 0:00:56.524 ********** 2025-06-03 15:31:54.280660 | orchestrator | changed: [testbed-manager] 2025-06-03 15:31:54.280671 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:31:54.280682 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:31:54.280693 | orchestrator | changed: 
[testbed-node-2] 2025-06-03 15:31:54.280703 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:31:54.280714 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:31:54.280725 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:31:54.280736 | orchestrator | 2025-06-03 15:31:54.280746 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-03 15:31:54.280757 | orchestrator | Tuesday 03 June 2025 15:30:10 +0000 (0:00:01.799) 0:00:58.323 ********** 2025-06-03 15:31:54.280768 | orchestrator | changed: [testbed-manager] 2025-06-03 15:31:54.280779 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:31:54.280789 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:31:54.280800 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:31:54.280810 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:31:54.280821 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:31:54.280832 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:31:54.280843 | orchestrator | 2025-06-03 15:31:54.280854 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:31:54.280865 | orchestrator | Tuesday 03 June 2025 15:30:11 +0000 (0:00:01.060) 0:00:59.384 ********** 2025-06-03 15:31:54.280876 | orchestrator | 2025-06-03 15:31:54.280887 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:31:54.280898 | orchestrator | Tuesday 03 June 2025 15:30:12 +0000 (0:00:00.180) 0:00:59.564 ********** 2025-06-03 15:31:54.280909 | orchestrator | 2025-06-03 15:31:54.280920 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:31:54.280931 | orchestrator | Tuesday 03 June 2025 15:30:12 +0000 (0:00:00.063) 0:00:59.628 ********** 2025-06-03 15:31:54.280941 | orchestrator | 2025-06-03 15:31:54.280953 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:31:54.280963 | orchestrator | Tuesday 03 June 2025 15:30:12 +0000 (0:00:00.062) 0:00:59.691 ********** 2025-06-03 15:31:54.280974 | orchestrator | 2025-06-03 15:31:54.280986 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:31:54.281003 | orchestrator | Tuesday 03 June 2025 15:30:12 +0000 (0:00:00.064) 0:00:59.755 ********** 2025-06-03 15:31:54.281015 | orchestrator | 2025-06-03 15:31:54.281026 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:31:54.281036 | orchestrator | Tuesday 03 June 2025 15:30:12 +0000 (0:00:00.072) 0:00:59.828 ********** 2025-06-03 15:31:54.281047 | orchestrator | 2025-06-03 15:31:54.281058 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:31:54.281070 | orchestrator | Tuesday 03 June 2025 15:30:12 +0000 (0:00:00.105) 0:00:59.934 ********** 2025-06-03 15:31:54.281081 | orchestrator | 2025-06-03 15:31:54.281091 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-03 15:31:54.281102 | orchestrator | Tuesday 03 June 2025 15:30:12 +0000 (0:00:00.113) 0:01:00.047 ********** 2025-06-03 15:31:54.281120 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:31:54.281132 | orchestrator | changed: [testbed-manager] 2025-06-03 15:31:54.281143 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:31:54.281154 | orchestrator | 
changed: [testbed-node-2]
2025-06-03 15:31:54.281165 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:31:54.281175 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:31:54.281186 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:31:54.281197 | orchestrator |
2025-06-03 15:31:54.281207 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-06-03 15:31:54.281218 | orchestrator | Tuesday 03 June 2025 15:30:56 +0000 (0:00:43.566) 0:01:43.613 **********
2025-06-03 15:31:54.281229 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:31:54.281240 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:31:54.281251 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:31:54.281269 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:31:54.281280 | orchestrator | changed: [testbed-manager]
2025-06-03 15:31:54.281291 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:31:54.281302 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:31:54.281312 | orchestrator |
2025-06-03 15:31:54.281323 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-06-03 15:31:54.281334 | orchestrator | Tuesday 03 June 2025 15:31:41 +0000 (0:00:45.387) 0:02:29.001 **********
2025-06-03 15:31:54.281345 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:31:54.281356 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:31:54.281367 | orchestrator | ok: [testbed-manager]
2025-06-03 15:31:54.281378 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:31:54.281389 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:31:54.281400 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:31:54.281411 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:31:54.281422 | orchestrator |
2025-06-03 15:31:54.281432 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-06-03 15:31:54.281443 | orchestrator | Tuesday 03 June 2025 15:31:43 +0000 (0:00:02.236) 0:02:31.237 **********
2025-06-03 15:31:54.281454 | orchestrator | changed: [testbed-manager]
2025-06-03 15:31:54.281465 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:31:54.281475 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:31:54.281486 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:31:54.281497 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:31:54.281508 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:31:54.281519 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:31:54.281530 | orchestrator |
2025-06-03 15:31:54.281540 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:31:54.281552 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-03 15:31:54.281563 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-03 15:31:54.281575 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-03 15:31:54.281586 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-03 15:31:54.281597 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-03 15:31:54.281628 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-03 15:31:54.281641 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-03 15:31:54.281652 | orchestrator |
2025-06-03 15:31:54.281663 | orchestrator |
2025-06-03 15:31:54.281674 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:31:54.281685 | orchestrator | Tuesday 03 June 2025 15:31:52 +0000 (0:00:09.277) 0:02:40.515 **********
2025-06-03 15:31:54.281695 | orchestrator | ===============================================================================
2025-06-03 15:31:54.281706 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 45.39s
2025-06-03 15:31:54.281717 | orchestrator | common : Restart fluentd container ------------------------------------- 43.57s
2025-06-03 15:31:54.281727 | orchestrator | common : Restart cron container ----------------------------------------- 9.28s
2025-06-03 15:31:54.281738 | orchestrator | common : Copying over config.json files for services -------------------- 6.36s
2025-06-03 15:31:54.281749 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.63s
2025-06-03 15:31:54.281759 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.24s
2025-06-03 15:31:54.281777 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.36s
2025-06-03 15:31:54.281793 | orchestrator | common : Check common containers ---------------------------------------- 4.18s
2025-06-03 15:31:54.281805 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.64s
2025-06-03 15:31:54.281816 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.56s
2025-06-03 15:31:54.281828 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.50s
2025-06-03 15:31:54.281839 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.11s
2025-06-03 15:31:54.281849 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.57s
2025-06-03 15:31:54.281860 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.54s
2025-06-03 15:31:54.281877 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.24s
2025-06-03 15:31:54.281888 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.06s
2025-06-03 15:31:54.281899 | orchestrator | common : Creating log volume -------------------------------------------- 1.80s
2025-06-03 15:31:54.281909 | orchestrator | common : include_tasks -------------------------------------------------- 1.73s
2025-06-03 15:31:54.281920 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.57s
2025-06-03 15:31:54.281931 | orchestrator | common : include_tasks -------------------------------------------------- 1.34s
2025-06-03 15:31:54.282157 | orchestrator | 2025-06-03 15:31:54 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED
2025-06-03 15:31:54.282180 | orchestrator | 2025-06-03 15:31:54 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED
2025-06-03 15:31:54.282191 | orchestrator | 2025-06-03 15:31:54 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:31:57.327057 | orchestrator | 2025-06-03 15:31:57 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED
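Note on the repeated item dumps in the common play above: the role iterates over three container definitions (fluentd, kolla_toolbox, cron) for every host, and each definition has the same shape. The snippet below is a readability aid only; the key/value pairs are copied verbatim from the logged item dictionaries, and the name fluentd_service is just a label for this excerpt, not a kolla-ansible variable.

    # Values copied from the item output of the "Check common containers" task above.
    # The kolla_toolbox and cron entries differ only in image, environment, volumes,
    # and the privileged flag set on kolla_toolbox.
    fluentd_service = {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/fluentd:5.0.7.20250530",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": [
            "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "fluentd_data:/var/lib/fluentd/data/",
            "/var/log/journal:/var/log/journal:ro",
        ],
        "dimensions": {},
    }

Because the same dictionary is passed through "Ensuring config directories exist", "Copying over config.json files for services", "Ensuring config directories have correct owner and permission", and "Check common containers", the log prints it once per service and host, which accounts for most of the volume above.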
2025-06-03 15:31:57.327551 | orchestrator | 2025-06-03 15:31:57 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:31:57.328206 | orchestrator | 2025-06-03 15:31:57 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:31:57.329103 | orchestrator | 2025-06-03 15:31:57 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:31:57.331958 | orchestrator | 2025-06-03 15:31:57 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:31:57.332845 | orchestrator | 2025-06-03 15:31:57 | INFO  | Task 27989246-5944-49f3-b17b-3246b6f30946 is in state STARTED 2025-06-03 15:31:57.332883 | orchestrator | 2025-06-03 15:31:57 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:00.375954 | orchestrator | 2025-06-03 15:32:00 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:00.376065 | orchestrator | 2025-06-03 15:32:00 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:00.377075 | orchestrator | 2025-06-03 15:32:00 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:00.378417 | orchestrator | 2025-06-03 15:32:00 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:00.379409 | orchestrator | 2025-06-03 15:32:00 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:00.383059 | orchestrator | 2025-06-03 15:32:00 | INFO  | Task 27989246-5944-49f3-b17b-3246b6f30946 is in state STARTED 2025-06-03 15:32:00.383726 | orchestrator | 2025-06-03 15:32:00 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:03.444424 | orchestrator | 2025-06-03 15:32:03 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:03.446960 | orchestrator | 2025-06-03 15:32:03 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:03.448478 | orchestrator | 2025-06-03 15:32:03 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:03.453215 | orchestrator | 2025-06-03 15:32:03 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:03.454531 | orchestrator | 2025-06-03 15:32:03 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:03.458506 | orchestrator | 2025-06-03 15:32:03 | INFO  | Task 27989246-5944-49f3-b17b-3246b6f30946 is in state STARTED 2025-06-03 15:32:03.458552 | orchestrator | 2025-06-03 15:32:03 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:06.523717 | orchestrator | 2025-06-03 15:32:06 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:06.523823 | orchestrator | 2025-06-03 15:32:06 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:06.523858 | orchestrator | 2025-06-03 15:32:06 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:06.526200 | orchestrator | 2025-06-03 15:32:06 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:06.526227 | orchestrator | 2025-06-03 15:32:06 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:06.526238 | orchestrator | 2025-06-03 15:32:06 | INFO  | Task 27989246-5944-49f3-b17b-3246b6f30946 is in state STARTED 2025-06-03 15:32:06.526250 | orchestrator | 2025-06-03 15:32:06 | INFO  | Wait 1 second(s) 
until the next check 2025-06-03 15:32:09.583960 | orchestrator | 2025-06-03 15:32:09 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:09.584530 | orchestrator | 2025-06-03 15:32:09 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:09.585361 | orchestrator | 2025-06-03 15:32:09 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:09.589354 | orchestrator | 2025-06-03 15:32:09 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:09.590085 | orchestrator | 2025-06-03 15:32:09 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:09.593359 | orchestrator | 2025-06-03 15:32:09 | INFO  | Task 27989246-5944-49f3-b17b-3246b6f30946 is in state STARTED 2025-06-03 15:32:09.593388 | orchestrator | 2025-06-03 15:32:09 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:12.638892 | orchestrator | 2025-06-03 15:32:12 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:12.639561 | orchestrator | 2025-06-03 15:32:12 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:12.645841 | orchestrator | 2025-06-03 15:32:12 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:12.648412 | orchestrator | 2025-06-03 15:32:12 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:12.650731 | orchestrator | 2025-06-03 15:32:12 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:12.650911 | orchestrator | 2025-06-03 15:32:12 | INFO  | Task 27989246-5944-49f3-b17b-3246b6f30946 is in state STARTED 2025-06-03 15:32:12.651011 | orchestrator | 2025-06-03 15:32:12 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:15.694353 | orchestrator | 2025-06-03 15:32:15 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:15.694575 | orchestrator | 2025-06-03 15:32:15 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:15.698270 | orchestrator | 2025-06-03 15:32:15 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:15.701676 | orchestrator | 2025-06-03 15:32:15 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:15.702497 | orchestrator | 2025-06-03 15:32:15 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:15.705220 | orchestrator | 2025-06-03 15:32:15 | INFO  | Task 27989246-5944-49f3-b17b-3246b6f30946 is in state STARTED 2025-06-03 15:32:15.705267 | orchestrator | 2025-06-03 15:32:15 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:18.752575 | orchestrator | 2025-06-03 15:32:18 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:18.752800 | orchestrator | 2025-06-03 15:32:18 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:18.754604 | orchestrator | 2025-06-03 15:32:18 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:18.755584 | orchestrator | 2025-06-03 15:32:18 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:18.759591 | orchestrator | 2025-06-03 15:32:18 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:18.759661 | orchestrator | 2025-06-03 15:32:18 | 
INFO  | Task 27989246-5944-49f3-b17b-3246b6f30946 is in state SUCCESS 2025-06-03 15:32:18.759673 | orchestrator | 2025-06-03 15:32:18 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:21.802387 | orchestrator | 2025-06-03 15:32:21 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:21.802978 | orchestrator | 2025-06-03 15:32:21 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:21.804379 | orchestrator | 2025-06-03 15:32:21 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:21.804981 | orchestrator | 2025-06-03 15:32:21 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:21.805695 | orchestrator | 2025-06-03 15:32:21 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:21.806530 | orchestrator | 2025-06-03 15:32:21 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:21.806677 | orchestrator | 2025-06-03 15:32:21 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:24.849297 | orchestrator | 2025-06-03 15:32:24 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:24.850008 | orchestrator | 2025-06-03 15:32:24 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:24.851471 | orchestrator | 2025-06-03 15:32:24 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:24.854732 | orchestrator | 2025-06-03 15:32:24 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:24.856581 | orchestrator | 2025-06-03 15:32:24 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:24.859092 | orchestrator | 2025-06-03 15:32:24 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:24.859136 | orchestrator | 2025-06-03 15:32:24 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:27.916912 | orchestrator | 2025-06-03 15:32:27 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:27.919665 | orchestrator | 2025-06-03 15:32:27 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:27.922761 | orchestrator | 2025-06-03 15:32:27 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:27.925996 | orchestrator | 2025-06-03 15:32:27 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:27.929234 | orchestrator | 2025-06-03 15:32:27 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:27.932238 | orchestrator | 2025-06-03 15:32:27 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:27.932294 | orchestrator | 2025-06-03 15:32:27 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:30.977744 | orchestrator | 2025-06-03 15:32:30 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:30.977996 | orchestrator | 2025-06-03 15:32:30 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:30.978785 | orchestrator | 2025-06-03 15:32:30 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:30.979408 | orchestrator | 2025-06-03 15:32:30 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:30.980084 | orchestrator | 
2025-06-03 15:32:30 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:30.980817 | orchestrator | 2025-06-03 15:32:30 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:30.980850 | orchestrator | 2025-06-03 15:32:30 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:34.016768 | orchestrator | 2025-06-03 15:32:34 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state STARTED 2025-06-03 15:32:34.018934 | orchestrator | 2025-06-03 15:32:34 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:34.023449 | orchestrator | 2025-06-03 15:32:34 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:34.024310 | orchestrator | 2025-06-03 15:32:34 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:34.025540 | orchestrator | 2025-06-03 15:32:34 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:34.028215 | orchestrator | 2025-06-03 15:32:34 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:34.028267 | orchestrator | 2025-06-03 15:32:34 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:37.064086 | orchestrator | 2025-06-03 15:32:37 | INFO  | Task 6f04412d-baf5-4192-87bf-6cb03c3c6c05 is in state SUCCESS 2025-06-03 15:32:37.064879 | orchestrator | 2025-06-03 15:32:37.064948 | orchestrator | 2025-06-03 15:32:37.064956 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:32:37.064963 | orchestrator | 2025-06-03 15:32:37.064969 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:32:37.064989 | orchestrator | Tuesday 03 June 2025 15:32:03 +0000 (0:00:00.749) 0:00:00.749 ********** 2025-06-03 15:32:37.064995 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:32:37.065001 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:32:37.065006 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:32:37.065012 | orchestrator | 2025-06-03 15:32:37.065018 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:32:37.065023 | orchestrator | Tuesday 03 June 2025 15:32:04 +0000 (0:00:00.730) 0:00:01.480 ********** 2025-06-03 15:32:37.065029 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-06-03 15:32:37.065054 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-06-03 15:32:37.065059 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-06-03 15:32:37.065064 | orchestrator | 2025-06-03 15:32:37.065070 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-06-03 15:32:37.065075 | orchestrator | 2025-06-03 15:32:37.065080 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-06-03 15:32:37.065085 | orchestrator | Tuesday 03 June 2025 15:32:05 +0000 (0:00:01.122) 0:00:02.603 ********** 2025-06-03 15:32:37.065090 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:32:37.065097 | orchestrator | 2025-06-03 15:32:37.065102 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-06-03 15:32:37.065107 | orchestrator | Tuesday 03 June 2025 15:32:06 +0000 (0:00:01.222) 0:00:03.826 
********** 2025-06-03 15:32:37.065112 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-03 15:32:37.065118 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-03 15:32:37.065123 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-03 15:32:37.065128 | orchestrator | 2025-06-03 15:32:37.065133 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-06-03 15:32:37.065138 | orchestrator | Tuesday 03 June 2025 15:32:08 +0000 (0:00:01.459) 0:00:05.285 ********** 2025-06-03 15:32:37.065143 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-03 15:32:37.065149 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-03 15:32:37.065154 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-03 15:32:37.065160 | orchestrator | 2025-06-03 15:32:37.065165 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-06-03 15:32:37.065170 | orchestrator | Tuesday 03 June 2025 15:32:11 +0000 (0:00:03.573) 0:00:08.859 ********** 2025-06-03 15:32:37.065175 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:32:37.065180 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:32:37.065185 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:32:37.065190 | orchestrator | 2025-06-03 15:32:37.065196 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-06-03 15:32:37.065201 | orchestrator | Tuesday 03 June 2025 15:32:14 +0000 (0:00:02.892) 0:00:11.751 ********** 2025-06-03 15:32:37.065206 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:32:37.065211 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:32:37.065216 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:32:37.065221 | orchestrator | 2025-06-03 15:32:37.065226 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:32:37.065232 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:32:37.065238 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:32:37.065244 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:32:37.065249 | orchestrator | 2025-06-03 15:32:37.065254 | orchestrator | 2025-06-03 15:32:37.065259 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:32:37.065264 | orchestrator | Tuesday 03 June 2025 15:32:17 +0000 (0:00:02.700) 0:00:14.452 ********** 2025-06-03 15:32:37.065270 | orchestrator | =============================================================================== 2025-06-03 15:32:37.065277 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.57s 2025-06-03 15:32:37.065285 | orchestrator | memcached : Check memcached container ----------------------------------- 2.89s 2025-06-03 15:32:37.065293 | orchestrator | memcached : Restart memcached container --------------------------------- 2.70s 2025-06-03 15:32:37.065303 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.46s 2025-06-03 15:32:37.065350 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.22s 2025-06-03 15:32:37.065361 | orchestrator | Group hosts based on enabled services 
----------------------------------- 1.13s 2025-06-03 15:32:37.065369 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.73s 2025-06-03 15:32:37.065377 | orchestrator | 2025-06-03 15:32:37.065386 | orchestrator | 2025-06-03 15:32:37.065393 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:32:37.065402 | orchestrator | 2025-06-03 15:32:37.065411 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:32:37.065420 | orchestrator | Tuesday 03 June 2025 15:32:03 +0000 (0:00:00.584) 0:00:00.584 ********** 2025-06-03 15:32:37.065428 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:32:37.065438 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:32:37.065445 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:32:37.065452 | orchestrator | 2025-06-03 15:32:37.065458 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:32:37.065477 | orchestrator | Tuesday 03 June 2025 15:32:03 +0000 (0:00:00.472) 0:00:01.056 ********** 2025-06-03 15:32:37.065484 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-06-03 15:32:37.065490 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-06-03 15:32:37.065496 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-06-03 15:32:37.065502 | orchestrator | 2025-06-03 15:32:37.065508 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-06-03 15:32:37.065514 | orchestrator | 2025-06-03 15:32:37.065520 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-06-03 15:32:37.065526 | orchestrator | Tuesday 03 June 2025 15:32:04 +0000 (0:00:01.405) 0:00:02.462 ********** 2025-06-03 15:32:37.065532 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:32:37.065539 | orchestrator | 2025-06-03 15:32:37.065545 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-06-03 15:32:37.065551 | orchestrator | Tuesday 03 June 2025 15:32:06 +0000 (0:00:01.575) 0:00:04.037 ********** 2025-06-03 15:32:37.065559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065576 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065659 | orchestrator | 2025-06-03 15:32:37.065666 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-03 15:32:37.065672 | orchestrator | Tuesday 03 June 2025 15:32:08 +0000 (0:00:02.297) 0:00:06.335 ********** 2025-06-03 15:32:37.065679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-03
15:32:37.065735 | orchestrator | 2025-06-03 15:32:37.065741 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-06-03 15:32:37.065747 | orchestrator | Tuesday 03 June 2025 15:32:13 +0000 (0:00:04.730) 0:00:11.065 ********** 2025-06-03 15:32:37.065753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2025-06-03 15:32:37.065794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065800 | orchestrator | 2025-06-03 15:32:37.065809 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-06-03 15:32:37.065816 | orchestrator | Tuesday 03 June 2025 15:32:16 +0000 (0:00:03.247) 0:00:14.313 ********** 2025-06-03 15:32:37.065823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-03 15:32:37.065862 | orchestrator | 2025-06-03 15:32:37.065867 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-03 15:32:37.065876 | orchestrator | Tuesday 03 June 2025 15:32:19 +0000 (0:00:02.267) 0:00:16.580 ********** 2025-06-03 15:32:37.065881 | orchestrator | 2025-06-03 15:32:37.065886 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-03 15:32:37.065892 | orchestrator | Tuesday 03 June 2025 15:32:19 +0000 (0:00:00.095) 0:00:16.676 ********** 2025-06-03 15:32:37.065897 | orchestrator | 2025-06-03 15:32:37.065903 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-03 15:32:37.065908 | orchestrator | Tuesday 03 June 2025 15:32:19 +0000 (0:00:00.087) 0:00:16.763 ********** 2025-06-03 15:32:37.065913 | orchestrator | 2025-06-03 15:32:37.065918 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-06-03 15:32:37.065924 | orchestrator | Tuesday 03 June 2025 15:32:19 +0000 (0:00:00.057) 0:00:16.821 ********** 2025-06-03 15:32:37.065929 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:32:37.065934 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:32:37.065939 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:32:37.065944 | orchestrator | 2025-06-03 15:32:37.065949 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-06-03 15:32:37.065955 | orchestrator | Tuesday 03 June 2025 15:32:29 +0000 (0:00:10.366) 0:00:27.187 ********** 2025-06-03 15:32:37.065960 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:32:37.065969 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:32:37.065974 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:32:37.065979 | orchestrator | 2025-06-03 15:32:37.065984 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 
15:32:37.065990 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:32:37.065995 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:32:37.066000 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:32:37.066005 | orchestrator | 2025-06-03 15:32:37.066011 | orchestrator | 2025-06-03 15:32:37.066057 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:32:37.066063 | orchestrator | Tuesday 03 June 2025 15:32:33 +0000 (0:00:03.927) 0:00:31.115 ********** 2025-06-03 15:32:37.066068 | orchestrator | =============================================================================== 2025-06-03 15:32:37.066073 | orchestrator | redis : Restart redis container ---------------------------------------- 10.37s 2025-06-03 15:32:37.066078 | orchestrator | redis : Copying over default config.json files -------------------------- 4.73s 2025-06-03 15:32:37.066083 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.93s 2025-06-03 15:32:37.066089 | orchestrator | redis : Copying over redis config files --------------------------------- 3.25s 2025-06-03 15:32:37.066094 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.30s 2025-06-03 15:32:37.066099 | orchestrator | redis : Check redis containers ------------------------------------------ 2.27s 2025-06-03 15:32:37.066104 | orchestrator | redis : include_tasks --------------------------------------------------- 1.58s 2025-06-03 15:32:37.066109 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.41s 2025-06-03 15:32:37.066115 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s 2025-06-03 15:32:37.066120 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s 2025-06-03 15:32:37.066303 | orchestrator | 2025-06-03 15:32:37 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:37.067302 | orchestrator | 2025-06-03 15:32:37 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:37.067933 | orchestrator | 2025-06-03 15:32:37 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:37.068567 | orchestrator | 2025-06-03 15:32:37 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:37.069310 | orchestrator | 2025-06-03 15:32:37 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:37.069372 | orchestrator | 2025-06-03 15:32:37 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:40.105941 | orchestrator | 2025-06-03 15:32:40 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:40.108385 | orchestrator | 2025-06-03 15:32:40 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:40.110213 | orchestrator | 2025-06-03 15:32:40 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:40.112234 | orchestrator | 2025-06-03 15:32:40 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:40.114222 | orchestrator | 2025-06-03 15:32:40 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 
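The memcached and redis plays above follow the usual kolla-ansible per-service deployment pattern: ensure the /etc/kolla/<service>/ config directory exists, copy a config.json plus the service configuration into it, run a "check container" task that records whether the container definition changed, and let a handler restart the container only when it did. The config.json consumed by the container at startup is not shown in this log; the Python sketch below only illustrates the general shape of such a file (a command plus a list of config_files copy rules). The command, owner and permissions here are assumptions, not the values actually deployed on the testbed nodes.

    # Illustrative sketch of a kolla-style config.json; all values are assumed.
    import json

    service_config = {
        # Assumed command; the real one is defined by the kolla redis image/role.
        "command": "/usr/bin/redis-server /etc/redis/redis.conf",
        "config_files": [
            {
                # The source side matches the '/var/lib/kolla/config_files/:ro'
                # bind mount listed in the container definitions above.
                "source": "/var/lib/kolla/config_files/redis.conf",
                "dest": "/etc/redis/redis.conf",
                "owner": "redis",   # assumed owner
                "perm": "0600",     # assumed permissions
            }
        ],
    }

    with open("config.json", "w", encoding="utf-8") as handle:
        json.dump(service_config, handle, indent=4)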
2025-06-03 15:32:40.114263 | orchestrator | 2025-06-03 15:32:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:43.145781 | orchestrator | 2025-06-03 15:32:43 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:43.146151 | orchestrator | 2025-06-03 15:32:43 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:43.146737 | orchestrator | 2025-06-03 15:32:43 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:43.147206 | orchestrator | 2025-06-03 15:32:43 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:43.148968 | orchestrator | 2025-06-03 15:32:43 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:43.149005 | orchestrator | 2025-06-03 15:32:43 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:46.192427 | orchestrator | 2025-06-03 15:32:46 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:46.194560 | orchestrator | 2025-06-03 15:32:46 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:46.195059 | orchestrator | 2025-06-03 15:32:46 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:46.197122 | orchestrator | 2025-06-03 15:32:46 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:46.198099 | orchestrator | 2025-06-03 15:32:46 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:46.198138 | orchestrator | 2025-06-03 15:32:46 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:49.235919 | orchestrator | 2025-06-03 15:32:49 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:49.236016 | orchestrator | 2025-06-03 15:32:49 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:49.236085 | orchestrator | 2025-06-03 15:32:49 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:49.236992 | orchestrator | 2025-06-03 15:32:49 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:49.238223 | orchestrator | 2025-06-03 15:32:49 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:49.238243 | orchestrator | 2025-06-03 15:32:49 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:52.281133 | orchestrator | 2025-06-03 15:32:52 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:52.283184 | orchestrator | 2025-06-03 15:32:52 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:52.283450 | orchestrator | 2025-06-03 15:32:52 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:52.284257 | orchestrator | 2025-06-03 15:32:52 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:52.286134 | orchestrator | 2025-06-03 15:32:52 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:52.286176 | orchestrator | 2025-06-03 15:32:52 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:55.330841 | orchestrator | 2025-06-03 15:32:55 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:55.333008 | orchestrator | 2025-06-03 15:32:55 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 
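The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" records are the deployment wrapper polling its background tasks (one per kolla play) until each one reports SUCCESS. The sketch below is a minimal rendering of that polling pattern, not the actual OSISM implementation; get_task_state() is a hypothetical helper standing in for the real task-state lookup.

    # Minimal polling sketch; error/FAILURE handling is omitted for brevity.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Re-check all pending tasks until every one reports SUCCESS."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):   # sorted() copies, so discard() is safe
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval:g} second(s) until the next check")
                time.sleep(interval)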
2025-06-03 15:32:55.334237 | orchestrator | 2025-06-03 15:32:55 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:55.338414 | orchestrator | 2025-06-03 15:32:55 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:55.339080 | orchestrator | 2025-06-03 15:32:55 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:55.339142 | orchestrator | 2025-06-03 15:32:55 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:58.381854 | orchestrator | 2025-06-03 15:32:58 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:32:58.385466 | orchestrator | 2025-06-03 15:32:58 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:32:58.386403 | orchestrator | 2025-06-03 15:32:58 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:32:58.391255 | orchestrator | 2025-06-03 15:32:58 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:32:58.392297 | orchestrator | 2025-06-03 15:32:58 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:32:58.392419 | orchestrator | 2025-06-03 15:32:58 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:01.435881 | orchestrator | 2025-06-03 15:33:01 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:33:01.436000 | orchestrator | 2025-06-03 15:33:01 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:01.436016 | orchestrator | 2025-06-03 15:33:01 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:01.436023 | orchestrator | 2025-06-03 15:33:01 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:01.436029 | orchestrator | 2025-06-03 15:33:01 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:01.436036 | orchestrator | 2025-06-03 15:33:01 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:04.456858 | orchestrator | 2025-06-03 15:33:04 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:33:04.457024 | orchestrator | 2025-06-03 15:33:04 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:04.458125 | orchestrator | 2025-06-03 15:33:04 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:04.459191 | orchestrator | 2025-06-03 15:33:04 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:04.459902 | orchestrator | 2025-06-03 15:33:04 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:04.459930 | orchestrator | 2025-06-03 15:33:04 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:07.488721 | orchestrator | 2025-06-03 15:33:07 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state STARTED 2025-06-03 15:33:07.491156 | orchestrator | 2025-06-03 15:33:07 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:07.492430 | orchestrator | 2025-06-03 15:33:07 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:07.493031 | orchestrator | 2025-06-03 15:33:07 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:07.493985 | orchestrator | 2025-06-03 15:33:07 | INFO  | Task 
4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:07.494118 | orchestrator | 2025-06-03 15:33:07 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:10.533584 | orchestrator | 2025-06-03 15:33:10 | INFO  | Task 65b226ae-c9e4-4700-b322-e6d615727e36 is in state SUCCESS 2025-06-03 15:33:10.535919 | orchestrator | 2025-06-03 15:33:10.536000 | orchestrator | 2025-06-03 15:33:10.536016 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:33:10.536029 | orchestrator | 2025-06-03 15:33:10.536041 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:33:10.536078 | orchestrator | Tuesday 03 June 2025 15:32:02 +0000 (0:00:00.981) 0:00:00.981 ********** 2025-06-03 15:33:10.536090 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:10.536102 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:10.536113 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:10.536124 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:33:10.536135 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:33:10.536145 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:33:10.536156 | orchestrator | 2025-06-03 15:33:10.536167 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:33:10.536178 | orchestrator | Tuesday 03 June 2025 15:32:03 +0000 (0:00:00.999) 0:00:01.981 ********** 2025-06-03 15:33:10.536189 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-03 15:33:10.536200 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-03 15:33:10.536211 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-03 15:33:10.536222 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-03 15:33:10.536233 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-03 15:33:10.536243 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-03 15:33:10.536254 | orchestrator | 2025-06-03 15:33:10.536265 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-06-03 15:33:10.536276 | orchestrator | 2025-06-03 15:33:10.536287 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-06-03 15:33:10.536297 | orchestrator | Tuesday 03 June 2025 15:32:05 +0000 (0:00:01.928) 0:00:03.909 ********** 2025-06-03 15:33:10.536310 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:33:10.536322 | orchestrator | 2025-06-03 15:33:10.536333 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-03 15:33:10.536343 | orchestrator | Tuesday 03 June 2025 15:32:08 +0000 (0:00:03.252) 0:00:07.161 ********** 2025-06-03 15:33:10.536354 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-03 15:33:10.536365 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-03 15:33:10.536376 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-03 15:33:10.536387 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-03 15:33:10.536398 | orchestrator 
| changed: [testbed-node-4] => (item=openvswitch) 2025-06-03 15:33:10.536424 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-03 15:33:10.536435 | orchestrator | 2025-06-03 15:33:10.536446 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-03 15:33:10.536457 | orchestrator | Tuesday 03 June 2025 15:32:11 +0000 (0:00:03.309) 0:00:10.471 ********** 2025-06-03 15:33:10.536468 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-03 15:33:10.536481 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-03 15:33:10.536494 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-03 15:33:10.536506 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-03 15:33:10.536519 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-03 15:33:10.536532 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-03 15:33:10.536545 | orchestrator | 2025-06-03 15:33:10.536557 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-03 15:33:10.536569 | orchestrator | Tuesday 03 June 2025 15:32:14 +0000 (0:00:03.020) 0:00:13.491 ********** 2025-06-03 15:33:10.536582 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-06-03 15:33:10.536594 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:10.536607 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-06-03 15:33:10.536626 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:10.536805 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-06-03 15:33:10.536831 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-06-03 15:33:10.536843 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:10.536854 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-06-03 15:33:10.536865 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:10.536875 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:10.536886 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-06-03 15:33:10.536897 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:10.536907 | orchestrator | 2025-06-03 15:33:10.536918 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-06-03 15:33:10.536930 | orchestrator | Tuesday 03 June 2025 15:32:16 +0000 (0:00:01.535) 0:00:15.026 ********** 2025-06-03 15:33:10.536941 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:10.536951 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:10.536962 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:10.536973 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:10.536984 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:10.536995 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:10.537006 | orchestrator | 2025-06-03 15:33:10.537016 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-06-03 15:33:10.537027 | orchestrator | Tuesday 03 June 2025 15:32:17 +0000 (0:00:01.083) 0:00:16.109 ********** 2025-06-03 15:33:10.537064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537154 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537212 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537253 | orchestrator | 2025-06-03 15:33:10.537264 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-06-03 15:33:10.537275 | orchestrator | Tuesday 03 June 2025 15:32:19 +0000 (0:00:02.108) 0:00:18.218 ********** 2025-06-03 15:33:10.537287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537344 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537385 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537421 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537461 | orchestrator | 2025-06-03 15:33:10.537473 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-06-03 15:33:10.537484 | orchestrator | Tuesday 03 June 2025 15:32:24 +0000 (0:00:05.155) 0:00:23.374 ********** 2025-06-03 15:33:10.537495 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:10.537506 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:10.537516 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:10.537527 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:10.537538 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:10.537601 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:10.537613 | orchestrator | 2025-06-03 15:33:10.537632 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-06-03 15:33:10.537687 | orchestrator | Tuesday 03 June 2025 15:32:26 +0000 (0:00:01.836) 0:00:25.210 ********** 2025-06-03 15:33:10.537704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 
15:33:10.537746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537798 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537822 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:10.537882 | orchestrator | 2025-06-03 15:33:10.537893 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:10.537904 | orchestrator | Tuesday 03 June 2025 15:32:29 +0000 (0:00:03.035) 0:00:28.245 ********** 2025-06-03 15:33:10.537915 | orchestrator | 2025-06-03 15:33:10.537931 | orchestrator | TASK 
[openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:10.537942 | orchestrator | Tuesday 03 June 2025 15:32:29 +0000 (0:00:00.128) 0:00:28.374 ********** 2025-06-03 15:33:10.537953 | orchestrator | 2025-06-03 15:33:10.537964 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:10.537974 | orchestrator | Tuesday 03 June 2025 15:32:29 +0000 (0:00:00.133) 0:00:28.508 ********** 2025-06-03 15:33:10.537985 | orchestrator | 2025-06-03 15:33:10.537995 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:10.538006 | orchestrator | Tuesday 03 June 2025 15:32:29 +0000 (0:00:00.139) 0:00:28.647 ********** 2025-06-03 15:33:10.538076 | orchestrator | 2025-06-03 15:33:10.538088 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:10.538099 | orchestrator | Tuesday 03 June 2025 15:32:30 +0000 (0:00:00.281) 0:00:28.929 ********** 2025-06-03 15:33:10.538109 | orchestrator | 2025-06-03 15:33:10.538120 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:10.538131 | orchestrator | Tuesday 03 June 2025 15:32:30 +0000 (0:00:00.551) 0:00:29.480 ********** 2025-06-03 15:33:10.538141 | orchestrator | 2025-06-03 15:33:10.538152 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-03 15:33:10.538163 | orchestrator | Tuesday 03 June 2025 15:32:31 +0000 (0:00:00.547) 0:00:30.028 ********** 2025-06-03 15:33:10.538173 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:10.538184 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:10.538195 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:10.538220 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:10.538230 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:10.538241 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:10.538252 | orchestrator | 2025-06-03 15:33:10.538262 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-03 15:33:10.538274 | orchestrator | Tuesday 03 June 2025 15:32:37 +0000 (0:00:06.205) 0:00:36.233 ********** 2025-06-03 15:33:10.538285 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:10.538296 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:10.538306 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:10.538331 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:33:10.538342 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:33:10.538353 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:33:10.538364 | orchestrator | 2025-06-03 15:33:10.538375 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-03 15:33:10.538385 | orchestrator | Tuesday 03 June 2025 15:32:38 +0000 (0:00:01.205) 0:00:37.438 ********** 2025-06-03 15:33:10.538396 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:10.538414 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:10.538425 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:10.538436 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:10.538446 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:10.538457 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:10.538467 | orchestrator | 2025-06-03 15:33:10.538478 | orchestrator | TASK [openvswitch : Set system-id, 
hostname and hw-offload] ******************** 2025-06-03 15:33:10.538489 | orchestrator | Tuesday 03 June 2025 15:32:47 +0000 (0:00:08.773) 0:00:46.212 ********** 2025-06-03 15:33:10.538506 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-03 15:33:10.538517 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-03 15:33:10.538528 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-03 15:33:10.538539 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-03 15:33:10.538550 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-03 15:33:10.538561 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-03 15:33:10.538571 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-03 15:33:10.538582 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-03 15:33:10.538592 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-03 15:33:10.538603 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-03 15:33:10.538614 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-03 15:33:10.538624 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-03 15:33:10.538657 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:10.538670 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:10.538680 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:10.538703 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:10.538713 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:10.538729 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:10.538740 | orchestrator | 2025-06-03 15:33:10.538751 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-03 15:33:10.538762 | orchestrator | Tuesday 03 June 2025 15:32:55 +0000 (0:00:07.994) 0:00:54.207 ********** 2025-06-03 15:33:10.538772 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-03 15:33:10.538783 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:10.538794 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-03 15:33:10.538805 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:10.538815 | 
orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-03 15:33:10.538826 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:10.538837 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-03 15:33:10.538847 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-03 15:33:10.538915 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-03 15:33:10.538956 | orchestrator | 2025-06-03 15:33:10.538975 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-03 15:33:10.539016 | orchestrator | Tuesday 03 June 2025 15:32:57 +0000 (0:00:02.350) 0:00:56.557 ********** 2025-06-03 15:33:10.539029 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-03 15:33:10.539053 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:10.539064 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-03 15:33:10.539075 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:10.539086 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-03 15:33:10.539096 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:10.539107 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-03 15:33:10.539118 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-03 15:33:10.539129 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-03 15:33:10.539139 | orchestrator | 2025-06-03 15:33:10.539150 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-03 15:33:10.539161 | orchestrator | Tuesday 03 June 2025 15:33:01 +0000 (0:00:03.557) 0:01:00.115 ********** 2025-06-03 15:33:10.539172 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:10.539182 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:10.539193 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:10.539204 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:10.539214 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:10.539225 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:10.539235 | orchestrator | 2025-06-03 15:33:10.539246 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:33:10.539258 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:33:10.539278 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:33:10.539289 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:33:10.539300 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:33:10.539311 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:33:10.539322 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:33:10.539332 | orchestrator | 2025-06-03 15:33:10.539343 | orchestrator | 2025-06-03 15:33:10.539354 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:33:10.539364 | orchestrator | Tuesday 03 June 2025 15:33:08 +0000 (0:00:07.744) 0:01:07.860 ********** 2025-06-03 15:33:10.539375 | orchestrator | 
=============================================================================== 2025-06-03 15:33:10.539386 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.52s 2025-06-03 15:33:10.539397 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.99s 2025-06-03 15:33:10.539407 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 6.20s 2025-06-03 15:33:10.539418 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.16s 2025-06-03 15:33:10.539429 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.56s 2025-06-03 15:33:10.539439 | orchestrator | module-load : Load modules ---------------------------------------------- 3.31s 2025-06-03 15:33:10.539463 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.25s 2025-06-03 15:33:10.539473 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.04s 2025-06-03 15:33:10.539484 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.02s 2025-06-03 15:33:10.539494 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.35s 2025-06-03 15:33:10.539505 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.11s 2025-06-03 15:33:10.539516 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.93s 2025-06-03 15:33:10.539533 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.84s 2025-06-03 15:33:10.539544 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.78s 2025-06-03 15:33:10.539554 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.54s 2025-06-03 15:33:10.539565 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.21s 2025-06-03 15:33:10.539575 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.08s 2025-06-03 15:33:10.539586 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.00s 2025-06-03 15:33:10.539597 | orchestrator | 2025-06-03 15:33:10 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:10.539607 | orchestrator | 2025-06-03 15:33:10 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:10.539618 | orchestrator | 2025-06-03 15:33:10 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:10.539629 | orchestrator | 2025-06-03 15:33:10 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:10.539827 | orchestrator | 2025-06-03 15:33:10 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:10.539845 | orchestrator | 2025-06-03 15:33:10 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:13.577014 | orchestrator | 2025-06-03 15:33:13 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:13.580067 | orchestrator | 2025-06-03 15:33:13 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:13.585171 | orchestrator | 2025-06-03 15:33:13 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:13.590263 | orchestrator 
| 2025-06-03 15:33:13 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:13.593552 | orchestrator | 2025-06-03 15:33:13 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:13.593740 | orchestrator | 2025-06-03 15:33:13 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:16.624419 | orchestrator | 2025-06-03 15:33:16 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:16.625562 | orchestrator | 2025-06-03 15:33:16 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:16.629072 | orchestrator | 2025-06-03 15:33:16 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:16.629949 | orchestrator | 2025-06-03 15:33:16 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:16.630788 | orchestrator | 2025-06-03 15:33:16 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:16.630817 | orchestrator | 2025-06-03 15:33:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:19.674419 | orchestrator | 2025-06-03 15:33:19 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:19.674497 | orchestrator | 2025-06-03 15:33:19 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:19.678787 | orchestrator | 2025-06-03 15:33:19 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:19.680531 | orchestrator | 2025-06-03 15:33:19 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:19.682785 | orchestrator | 2025-06-03 15:33:19 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:19.682828 | orchestrator | 2025-06-03 15:33:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:22.732536 | orchestrator | 2025-06-03 15:33:22 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:22.733016 | orchestrator | 2025-06-03 15:33:22 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:22.734013 | orchestrator | 2025-06-03 15:33:22 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:22.734962 | orchestrator | 2025-06-03 15:33:22 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:22.736686 | orchestrator | 2025-06-03 15:33:22 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:22.736738 | orchestrator | 2025-06-03 15:33:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:25.785196 | orchestrator | 2025-06-03 15:33:25 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:25.786256 | orchestrator | 2025-06-03 15:33:25 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:25.788148 | orchestrator | 2025-06-03 15:33:25 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:25.790618 | orchestrator | 2025-06-03 15:33:25 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:25.793528 | orchestrator | 2025-06-03 15:33:25 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:25.793561 | orchestrator | 2025-06-03 15:33:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:28.818188 | orchestrator | 
2025-06-03 15:33:28 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:28.819406 | orchestrator | 2025-06-03 15:33:28 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:28.820213 | orchestrator | 2025-06-03 15:33:28 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:28.820731 | orchestrator | 2025-06-03 15:33:28 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:28.821369 | orchestrator | 2025-06-03 15:33:28 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:28.821521 | orchestrator | 2025-06-03 15:33:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:31.859263 | orchestrator | 2025-06-03 15:33:31 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:31.861203 | orchestrator | 2025-06-03 15:33:31 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:31.863110 | orchestrator | 2025-06-03 15:33:31 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:31.865328 | orchestrator | 2025-06-03 15:33:31 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:31.866848 | orchestrator | 2025-06-03 15:33:31 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:31.867582 | orchestrator | 2025-06-03 15:33:31 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:34.904270 | orchestrator | 2025-06-03 15:33:34 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:34.904580 | orchestrator | 2025-06-03 15:33:34 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:34.905089 | orchestrator | 2025-06-03 15:33:34 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:34.905833 | orchestrator | 2025-06-03 15:33:34 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:34.906470 | orchestrator | 2025-06-03 15:33:34 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:34.906712 | orchestrator | 2025-06-03 15:33:34 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:37.947318 | orchestrator | 2025-06-03 15:33:37 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:37.947862 | orchestrator | 2025-06-03 15:33:37 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:37.948814 | orchestrator | 2025-06-03 15:33:37 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:37.950220 | orchestrator | 2025-06-03 15:33:37 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:37.951246 | orchestrator | 2025-06-03 15:33:37 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:37.951334 | orchestrator | 2025-06-03 15:33:37 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:40.983080 | orchestrator | 2025-06-03 15:33:40 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:40.984627 | orchestrator | 2025-06-03 15:33:40 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:40.985458 | orchestrator | 2025-06-03 15:33:40 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 
15:33:40.987326 | orchestrator | 2025-06-03 15:33:40 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:40.987917 | orchestrator | 2025-06-03 15:33:40 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:40.987994 | orchestrator | 2025-06-03 15:33:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:44.027848 | orchestrator | 2025-06-03 15:33:44 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:44.028416 | orchestrator | 2025-06-03 15:33:44 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:44.028993 | orchestrator | 2025-06-03 15:33:44 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:44.032267 | orchestrator | 2025-06-03 15:33:44 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:44.033327 | orchestrator | 2025-06-03 15:33:44 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:44.033374 | orchestrator | 2025-06-03 15:33:44 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:47.064746 | orchestrator | 2025-06-03 15:33:47 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:47.066399 | orchestrator | 2025-06-03 15:33:47 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:47.069084 | orchestrator | 2025-06-03 15:33:47 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:47.071586 | orchestrator | 2025-06-03 15:33:47 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state STARTED 2025-06-03 15:33:47.072765 | orchestrator | 2025-06-03 15:33:47 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:47.074316 | orchestrator | 2025-06-03 15:33:47 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:50.127822 | orchestrator | 2025-06-03 15:33:50 | INFO  | Task f7b17fc5-09cc-4239-98a1-e6e60221b5cd is in state STARTED 2025-06-03 15:33:50.129884 | orchestrator | 2025-06-03 15:33:50 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:50.132344 | orchestrator | 2025-06-03 15:33:50 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:50.137410 | orchestrator | 2025-06-03 15:33:50 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:50.144941 | orchestrator | 2025-06-03 15:33:50.145027 | orchestrator | 2025-06-03 15:33:50.145040 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-03 15:33:50.145052 | orchestrator | 2025-06-03 15:33:50.145061 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-03 15:33:50.145071 | orchestrator | Tuesday 03 June 2025 15:29:12 +0000 (0:00:00.232) 0:00:00.232 ********** 2025-06-03 15:33:50.145080 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:33:50.145090 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:33:50.145100 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:33:50.145109 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.145118 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.145128 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.145136 | orchestrator | 2025-06-03 15:33:50.145145 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] 
************************** 2025-06-03 15:33:50.145155 | orchestrator | Tuesday 03 June 2025 15:29:13 +0000 (0:00:00.922) 0:00:01.155 ********** 2025-06-03 15:33:50.145164 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.145174 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.145183 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.145193 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.145202 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.145211 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.145220 | orchestrator | 2025-06-03 15:33:50.145229 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-03 15:33:50.145238 | orchestrator | Tuesday 03 June 2025 15:29:14 +0000 (0:00:00.879) 0:00:02.035 ********** 2025-06-03 15:33:50.145248 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.145257 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.145267 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.145276 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.145285 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.145294 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.145303 | orchestrator | 2025-06-03 15:33:50.145312 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-03 15:33:50.145321 | orchestrator | Tuesday 03 June 2025 15:29:15 +0000 (0:00:00.776) 0:00:02.811 ********** 2025-06-03 15:33:50.145330 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:50.145339 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:50.145348 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.145358 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.145367 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:50.145414 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.145424 | orchestrator | 2025-06-03 15:33:50.145434 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-03 15:33:50.145444 | orchestrator | Tuesday 03 June 2025 15:29:18 +0000 (0:00:03.176) 0:00:05.987 ********** 2025-06-03 15:33:50.145454 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:50.145463 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:50.145473 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:50.145507 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.145517 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.145527 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.145537 | orchestrator | 2025-06-03 15:33:50.145548 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-03 15:33:50.145558 | orchestrator | Tuesday 03 June 2025 15:29:19 +0000 (0:00:01.274) 0:00:07.262 ********** 2025-06-03 15:33:50.145568 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:50.145578 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:50.145588 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:50.145598 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.145608 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.145626 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.145636 | orchestrator | 2025-06-03 15:33:50.145671 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] 
******************* 2025-06-03 15:33:50.145681 | orchestrator | Tuesday 03 June 2025 15:29:21 +0000 (0:00:01.383) 0:00:08.646 ********** 2025-06-03 15:33:50.145690 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.145699 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.145708 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.145716 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.145725 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.145735 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.145743 | orchestrator | 2025-06-03 15:33:50.145753 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-03 15:33:50.145762 | orchestrator | Tuesday 03 June 2025 15:29:22 +0000 (0:00:00.809) 0:00:09.455 ********** 2025-06-03 15:33:50.145771 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.145780 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.145790 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.145799 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.145807 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.145817 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.145825 | orchestrator | 2025-06-03 15:33:50.145834 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-03 15:33:50.145844 | orchestrator | Tuesday 03 June 2025 15:29:23 +0000 (0:00:00.951) 0:00:10.407 ********** 2025-06-03 15:33:50.145853 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:33:50.145862 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:33:50.145871 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.145880 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:33:50.145889 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:33:50.145899 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.145908 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:33:50.145917 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:33:50.145926 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.145935 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:33:50.145959 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:33:50.145969 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.145978 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:33:50.145987 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:33:50.145996 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.146006 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:33:50.146068 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:33:50.146088 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.146097 | orchestrator | 2025-06-03 15:33:50.146107 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] 
********************* 2025-06-03 15:33:50.146116 | orchestrator | Tuesday 03 June 2025 15:29:24 +0000 (0:00:01.050) 0:00:11.458 ********** 2025-06-03 15:33:50.146125 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.146135 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.146144 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.146152 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.146161 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.146171 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.146180 | orchestrator | 2025-06-03 15:33:50.146189 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-03 15:33:50.146199 | orchestrator | Tuesday 03 June 2025 15:29:25 +0000 (0:00:01.321) 0:00:12.779 ********** 2025-06-03 15:33:50.146208 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:33:50.146217 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:33:50.146226 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:33:50.146235 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.146244 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.146252 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.146261 | orchestrator | 2025-06-03 15:33:50.146269 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-03 15:33:50.146278 | orchestrator | Tuesday 03 June 2025 15:29:26 +0000 (0:00:00.692) 0:00:13.472 ********** 2025-06-03 15:33:50.146287 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:50.146296 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.146304 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:50.146338 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.146348 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.146357 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:50.146366 | orchestrator | 2025-06-03 15:33:50.146375 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-03 15:33:50.146385 | orchestrator | Tuesday 03 June 2025 15:29:32 +0000 (0:00:05.989) 0:00:19.461 ********** 2025-06-03 15:33:50.146394 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.146403 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.146412 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.146421 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.146430 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.146439 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.146449 | orchestrator | 2025-06-03 15:33:50.146458 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-03 15:33:50.146467 | orchestrator | Tuesday 03 June 2025 15:29:33 +0000 (0:00:00.941) 0:00:20.403 ********** 2025-06-03 15:33:50.146476 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.146486 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.146494 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.146503 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.146518 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.146527 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.146536 | orchestrator | 2025-06-03 15:33:50.146545 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg 
spec 'main' - Configure the use of a custom container registry] *** 2025-06-03 15:33:50.146556 | orchestrator | Tuesday 03 June 2025 15:29:35 +0000 (0:00:02.459) 0:00:22.862 ********** 2025-06-03 15:33:50.146565 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.146574 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.146583 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.146608 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.146617 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.146626 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.146635 | orchestrator | 2025-06-03 15:33:50.146663 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-03 15:33:50.146681 | orchestrator | Tuesday 03 June 2025 15:29:36 +0000 (0:00:00.877) 0:00:23.739 ********** 2025-06-03 15:33:50.146690 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-03 15:33:50.146699 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-03 15:33:50.146709 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.146719 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-03 15:33:50.146729 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-03 15:33:50.146739 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.146749 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-03 15:33:50.146759 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-03 15:33:50.146768 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.146778 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-03 15:33:50.146788 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-03 15:33:50.146797 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.146807 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-03 15:33:50.146818 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-03 15:33:50.146827 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.146835 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-03 15:33:50.146844 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-03 15:33:50.146852 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.146861 | orchestrator | 2025-06-03 15:33:50.146870 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-03 15:33:50.146888 | orchestrator | Tuesday 03 June 2025 15:29:37 +0000 (0:00:01.118) 0:00:24.857 ********** 2025-06-03 15:33:50.146899 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.146909 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.146919 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.146927 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.146936 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.146946 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.146956 | orchestrator | 2025-06-03 15:33:50.146966 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-03 15:33:50.146975 | orchestrator | 2025-06-03 15:33:50.146986 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-03 15:33:50.146995 | orchestrator | Tuesday 03 June 
2025 15:29:38 +0000 (0:00:01.399) 0:00:26.257 ********** 2025-06-03 15:33:50.147005 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.147015 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.147025 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.147035 | orchestrator | 2025-06-03 15:33:50.147046 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-03 15:33:50.147056 | orchestrator | Tuesday 03 June 2025 15:29:40 +0000 (0:00:01.623) 0:00:27.880 ********** 2025-06-03 15:33:50.147064 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.147073 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.147082 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.147091 | orchestrator | 2025-06-03 15:33:50.147101 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-03 15:33:50.147110 | orchestrator | Tuesday 03 June 2025 15:29:41 +0000 (0:00:01.168) 0:00:29.048 ********** 2025-06-03 15:33:50.147120 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.147131 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.147140 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.147149 | orchestrator | 2025-06-03 15:33:50.147158 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-03 15:33:50.147168 | orchestrator | Tuesday 03 June 2025 15:29:42 +0000 (0:00:01.106) 0:00:30.155 ********** 2025-06-03 15:33:50.147177 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.147187 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.147203 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.147213 | orchestrator | 2025-06-03 15:33:50.147223 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-03 15:33:50.147233 | orchestrator | Tuesday 03 June 2025 15:29:43 +0000 (0:00:00.788) 0:00:30.944 ********** 2025-06-03 15:33:50.147242 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.147252 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.147262 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.147272 | orchestrator | 2025-06-03 15:33:50.147282 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-03 15:33:50.147291 | orchestrator | Tuesday 03 June 2025 15:29:43 +0000 (0:00:00.355) 0:00:31.300 ********** 2025-06-03 15:33:50.147300 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:33:50.147309 | orchestrator | 2025-06-03 15:33:50.147318 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-03 15:33:50.147329 | orchestrator | Tuesday 03 June 2025 15:29:44 +0000 (0:00:00.837) 0:00:32.138 ********** 2025-06-03 15:33:50.147339 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.147349 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.147358 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.147366 | orchestrator | 2025-06-03 15:33:50.147376 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-03 15:33:50.147391 | orchestrator | Tuesday 03 June 2025 15:29:47 +0000 (0:00:02.967) 0:00:35.105 ********** 2025-06-03 15:33:50.147401 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.147411 | orchestrator | skipping: [testbed-node-2] 2025-06-03 
15:33:50.147421 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.147431 | orchestrator | 2025-06-03 15:33:50.147441 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-03 15:33:50.147451 | orchestrator | Tuesday 03 June 2025 15:29:48 +0000 (0:00:01.034) 0:00:36.141 ********** 2025-06-03 15:33:50.147461 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.147471 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.147481 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.147490 | orchestrator | 2025-06-03 15:33:50.147500 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-03 15:33:50.147509 | orchestrator | Tuesday 03 June 2025 15:29:49 +0000 (0:00:01.051) 0:00:37.193 ********** 2025-06-03 15:33:50.147519 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.147530 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.147540 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.147549 | orchestrator | 2025-06-03 15:33:50.147558 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-03 15:33:50.147568 | orchestrator | Tuesday 03 June 2025 15:29:51 +0000 (0:00:01.742) 0:00:38.935 ********** 2025-06-03 15:33:50.147578 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.147588 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.147598 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.147607 | orchestrator | 2025-06-03 15:33:50.147617 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-03 15:33:50.147627 | orchestrator | Tuesday 03 June 2025 15:29:51 +0000 (0:00:00.398) 0:00:39.334 ********** 2025-06-03 15:33:50.147637 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.147696 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.147707 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.147717 | orchestrator | 2025-06-03 15:33:50.147728 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-03 15:33:50.147738 | orchestrator | Tuesday 03 June 2025 15:29:52 +0000 (0:00:00.445) 0:00:39.780 ********** 2025-06-03 15:33:50.147762 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.147771 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.147781 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.147792 | orchestrator | 2025-06-03 15:33:50.147809 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-03 15:33:50.147819 | orchestrator | Tuesday 03 June 2025 15:29:53 +0000 (0:00:01.479) 0:00:41.260 ********** 2025-06-03 15:33:50.147838 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-03 15:33:50.147848 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-03 15:33:50.147858 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
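The FAILED - RETRYING messages around this point are expected while the first control-plane nodes form their cluster: the role keeps polling until all three masters have registered. A rough manual equivalent of that check, assuming kubectl is already pointed at the bootstrapping cluster (the exact command the role runs is not shown in this log):

  # count the nodes that have joined; all three masters should eventually appear
  kubectl get nodes --no-headers | wc -l
  # if the retries run out, the join service itself has the details
  journalctl -u k3s-init.service --no-pager | tail -n 50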
2025-06-03 15:33:50.147868 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-03 15:33:50.147879 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-03 15:33:50.147889 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-03 15:33:50.147899 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-03 15:33:50.147909 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-03 15:33:50.147919 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-03 15:33:50.147929 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-03 15:33:50.147939 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-03 15:33:50.147948 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-03 15:33:50.147957 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-03 15:33:50.147968 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
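The retries stop once every master reports in; the playbook then saves the logs of, and kills, the temporary k3s-init unit and installs the permanent K3s service (the tasks that follow below). Assuming the bootstrap was started as a transient systemd unit named k3s-init, as the task names suggest (an assumption, not verified against the role), the teardown is roughly:

  # stop the throwaway unit used only for the first cluster-init run
  systemctl stop k3s-init.service
  systemctl reset-failed k3s-init.service
  # the permanent service takes over from here
  systemctl enable --now k3s.service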
2025-06-03 15:33:50.147977 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.147986 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.147995 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.148003 | orchestrator | 2025-06-03 15:33:50.148012 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-06-03 15:33:50.148021 | orchestrator | Tuesday 03 June 2025 15:30:49 +0000 (0:00:55.897) 0:01:37.157 ********** 2025-06-03 15:33:50.148029 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.148037 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.148046 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.148055 | orchestrator | 2025-06-03 15:33:50.148064 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-06-03 15:33:50.148073 | orchestrator | Tuesday 03 June 2025 15:30:50 +0000 (0:00:00.256) 0:01:37.413 ********** 2025-06-03 15:33:50.148082 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.148091 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.148100 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.148109 | orchestrator | 2025-06-03 15:33:50.148117 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-06-03 15:33:50.148126 | orchestrator | Tuesday 03 June 2025 15:30:51 +0000 (0:00:01.062) 0:01:38.475 ********** 2025-06-03 15:33:50.148134 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.148143 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.148158 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.148166 | orchestrator | 2025-06-03 15:33:50.148175 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-06-03 15:33:50.148183 | orchestrator | Tuesday 03 June 2025 15:30:52 +0000 (0:00:01.364) 0:01:39.840 ********** 2025-06-03 15:33:50.148192 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.148200 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.148209 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.148218 | orchestrator | 2025-06-03 15:33:50.148227 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-06-03 15:33:50.148236 | orchestrator | Tuesday 03 June 2025 15:31:10 +0000 (0:00:17.914) 0:01:57.754 ********** 2025-06-03 15:33:50.148245 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.148253 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.148260 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.148267 | orchestrator | 2025-06-03 15:33:50.148275 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-06-03 15:33:50.148282 | orchestrator | Tuesday 03 June 2025 15:31:11 +0000 (0:00:00.786) 0:01:58.541 ********** 2025-06-03 15:33:50.148290 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.148297 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.148305 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.148313 | orchestrator | 2025-06-03 15:33:50.148321 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-06-03 15:33:50.148328 | orchestrator | Tuesday 03 June 2025 15:31:11 +0000 (0:00:00.782) 0:01:59.323 ********** 2025-06-03 15:33:50.148336 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.148344 | orchestrator | changed: 
[testbed-node-1] 2025-06-03 15:33:50.148352 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.148359 | orchestrator | 2025-06-03 15:33:50.148366 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-06-03 15:33:50.148378 | orchestrator | Tuesday 03 June 2025 15:31:12 +0000 (0:00:00.657) 0:01:59.981 ********** 2025-06-03 15:33:50.148386 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.148393 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.148400 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.148408 | orchestrator | 2025-06-03 15:33:50.148415 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-06-03 15:33:50.148423 | orchestrator | Tuesday 03 June 2025 15:31:13 +0000 (0:00:00.991) 0:02:00.972 ********** 2025-06-03 15:33:50.148430 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.148438 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.148455 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.148463 | orchestrator | 2025-06-03 15:33:50.148470 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-06-03 15:33:50.148477 | orchestrator | Tuesday 03 June 2025 15:31:13 +0000 (0:00:00.335) 0:02:01.308 ********** 2025-06-03 15:33:50.148485 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.148493 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.148501 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.148508 | orchestrator | 2025-06-03 15:33:50.148516 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-06-03 15:33:50.148523 | orchestrator | Tuesday 03 June 2025 15:31:14 +0000 (0:00:00.656) 0:02:01.964 ********** 2025-06-03 15:33:50.148531 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.148538 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.148546 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.148553 | orchestrator | 2025-06-03 15:33:50.148560 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-06-03 15:33:50.148568 | orchestrator | Tuesday 03 June 2025 15:31:15 +0000 (0:00:00.733) 0:02:02.698 ********** 2025-06-03 15:33:50.148575 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.148583 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.148590 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.148602 | orchestrator | 2025-06-03 15:33:50.148610 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-06-03 15:33:50.148617 | orchestrator | Tuesday 03 June 2025 15:31:16 +0000 (0:00:01.279) 0:02:03.978 ********** 2025-06-03 15:33:50.148625 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:50.148632 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:50.148639 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:50.148666 | orchestrator | 2025-06-03 15:33:50.148674 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-06-03 15:33:50.148681 | orchestrator | Tuesday 03 June 2025 15:31:17 +0000 (0:00:00.814) 0:02:04.793 ********** 2025-06-03 15:33:50.148689 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.148696 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.148704 | orchestrator | skipping: [testbed-node-2] 2025-06-03 
15:33:50.148711 | orchestrator | 2025-06-03 15:33:50.148718 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-06-03 15:33:50.148726 | orchestrator | Tuesday 03 June 2025 15:31:17 +0000 (0:00:00.321) 0:02:05.114 ********** 2025-06-03 15:33:50.148733 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.148740 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.148748 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.148755 | orchestrator | 2025-06-03 15:33:50.149275 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-06-03 15:33:50.149301 | orchestrator | Tuesday 03 June 2025 15:31:18 +0000 (0:00:00.331) 0:02:05.446 ********** 2025-06-03 15:33:50.149310 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.149318 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.149327 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.149336 | orchestrator | 2025-06-03 15:33:50.149345 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-06-03 15:33:50.149354 | orchestrator | Tuesday 03 June 2025 15:31:19 +0000 (0:00:00.962) 0:02:06.408 ********** 2025-06-03 15:33:50.149362 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.149370 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.149379 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.149387 | orchestrator | 2025-06-03 15:33:50.149397 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-06-03 15:33:50.149406 | orchestrator | Tuesday 03 June 2025 15:31:19 +0000 (0:00:00.630) 0:02:07.039 ********** 2025-06-03 15:33:50.149415 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-03 15:33:50.149424 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-03 15:33:50.149433 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-03 15:33:50.149442 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-03 15:33:50.149451 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-03 15:33:50.149460 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-03 15:33:50.149469 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-03 15:33:50.149478 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-03 15:33:50.149486 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-03 15:33:50.149495 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-06-03 15:33:50.149504 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-03 15:33:50.149512 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-03 15:33:50.149520 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-06-03 15:33:50.149546 | orchestrator 
| changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-03 15:33:50.149554 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-03 15:33:50.149561 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-03 15:33:50.149568 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-03 15:33:50.149576 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-03 15:33:50.149583 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-03 15:33:50.149591 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-03 15:33:50.149598 | orchestrator | 2025-06-03 15:33:50.149605 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-06-03 15:33:50.149613 | orchestrator | 2025-06-03 15:33:50.149620 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-06-03 15:33:50.149628 | orchestrator | Tuesday 03 June 2025 15:31:22 +0000 (0:00:02.973) 0:02:10.012 ********** 2025-06-03 15:33:50.149635 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:33:50.149699 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:33:50.149708 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:33:50.149716 | orchestrator | 2025-06-03 15:33:50.149723 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-06-03 15:33:50.149730 | orchestrator | Tuesday 03 June 2025 15:31:23 +0000 (0:00:00.589) 0:02:10.602 ********** 2025-06-03 15:33:50.149738 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:33:50.149745 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:33:50.149752 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:33:50.149760 | orchestrator | 2025-06-03 15:33:50.149767 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-06-03 15:33:50.149775 | orchestrator | Tuesday 03 June 2025 15:31:23 +0000 (0:00:00.637) 0:02:11.239 ********** 2025-06-03 15:33:50.149782 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:33:50.149790 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:33:50.149797 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:33:50.149804 | orchestrator | 2025-06-03 15:33:50.149812 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-06-03 15:33:50.149820 | orchestrator | Tuesday 03 June 2025 15:31:24 +0000 (0:00:00.354) 0:02:11.594 ********** 2025-06-03 15:33:50.149827 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:33:50.149835 | orchestrator | 2025-06-03 15:33:50.149848 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-06-03 15:33:50.149856 | orchestrator | Tuesday 03 June 2025 15:31:24 +0000 (0:00:00.772) 0:02:12.367 ********** 2025-06-03 15:33:50.149863 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.149871 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.149879 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.149886 | orchestrator | 2025-06-03 15:33:50.149894 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-06-03 15:33:50.149901 | orchestrator | Tuesday 03 June 2025 15:31:25 +0000 (0:00:00.340) 0:02:12.707 ********** 2025-06-03 15:33:50.149909 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.149916 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.149923 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.149929 | orchestrator | 2025-06-03 15:33:50.149936 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-06-03 15:33:50.149942 | orchestrator | Tuesday 03 June 2025 15:31:25 +0000 (0:00:00.313) 0:02:13.021 ********** 2025-06-03 15:33:50.149950 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.149956 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.149969 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.149975 | orchestrator | 2025-06-03 15:33:50.149982 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-06-03 15:33:50.149988 | orchestrator | Tuesday 03 June 2025 15:31:26 +0000 (0:00:00.385) 0:02:13.407 ********** 2025-06-03 15:33:50.149995 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:50.150002 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:50.150008 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:50.150055 | orchestrator | 2025-06-03 15:33:50.150064 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-06-03 15:33:50.150071 | orchestrator | Tuesday 03 June 2025 15:31:27 +0000 (0:00:01.579) 0:02:14.986 ********** 2025-06-03 15:33:50.150080 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:50.150087 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:50.150095 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:50.150103 | orchestrator | 2025-06-03 15:33:50.150110 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-03 15:33:50.150118 | orchestrator | 2025-06-03 15:33:50.150126 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-03 15:33:50.150133 | orchestrator | Tuesday 03 June 2025 15:31:37 +0000 (0:00:10.010) 0:02:24.997 ********** 2025-06-03 15:33:50.150139 | orchestrator | ok: [testbed-manager] 2025-06-03 15:33:50.150146 | orchestrator | 2025-06-03 15:33:50.150153 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-03 15:33:50.150160 | orchestrator | Tuesday 03 June 2025 15:31:38 +0000 (0:00:00.766) 0:02:25.764 ********** 2025-06-03 15:33:50.150167 | orchestrator | changed: [testbed-manager] 2025-06-03 15:33:50.150174 | orchestrator | 2025-06-03 15:33:50.150182 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-03 15:33:50.150189 | orchestrator | Tuesday 03 June 2025 15:31:38 +0000 (0:00:00.475) 0:02:26.240 ********** 2025-06-03 15:33:50.150196 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-03 15:33:50.150203 | orchestrator | 2025-06-03 15:33:50.150212 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-03 15:33:50.150219 | orchestrator | Tuesday 03 June 2025 15:31:39 +0000 (0:00:01.050) 0:02:27.290 ********** 2025-06-03 15:33:50.150226 | orchestrator | changed: [testbed-manager] 2025-06-03 15:33:50.150234 | orchestrator | 2025-06-03 
15:33:50.150250 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-03 15:33:50.150258 | orchestrator | Tuesday 03 June 2025 15:31:40 +0000 (0:00:00.877) 0:02:28.168 ********** 2025-06-03 15:33:50.150265 | orchestrator | changed: [testbed-manager] 2025-06-03 15:33:50.150273 | orchestrator | 2025-06-03 15:33:50.150280 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-03 15:33:50.150287 | orchestrator | Tuesday 03 June 2025 15:31:41 +0000 (0:00:00.619) 0:02:28.787 ********** 2025-06-03 15:33:50.150295 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-03 15:33:50.150302 | orchestrator | 2025-06-03 15:33:50.150309 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-03 15:33:50.150316 | orchestrator | Tuesday 03 June 2025 15:31:43 +0000 (0:00:01.828) 0:02:30.615 ********** 2025-06-03 15:33:50.150324 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-03 15:33:50.150331 | orchestrator | 2025-06-03 15:33:50.150338 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-03 15:33:50.150345 | orchestrator | Tuesday 03 June 2025 15:31:44 +0000 (0:00:00.948) 0:02:31.564 ********** 2025-06-03 15:33:50.150353 | orchestrator | changed: [testbed-manager] 2025-06-03 15:33:50.150360 | orchestrator | 2025-06-03 15:33:50.150367 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-03 15:33:50.150374 | orchestrator | Tuesday 03 June 2025 15:31:44 +0000 (0:00:00.518) 0:02:32.082 ********** 2025-06-03 15:33:50.150381 | orchestrator | changed: [testbed-manager] 2025-06-03 15:33:50.150388 | orchestrator | 2025-06-03 15:33:50.150397 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-06-03 15:33:50.150417 | orchestrator | 2025-06-03 15:33:50.150425 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-06-03 15:33:50.150432 | orchestrator | Tuesday 03 June 2025 15:31:45 +0000 (0:00:00.566) 0:02:32.648 ********** 2025-06-03 15:33:50.150440 | orchestrator | ok: [testbed-manager] 2025-06-03 15:33:50.150448 | orchestrator | 2025-06-03 15:33:50.150456 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-06-03 15:33:50.150463 | orchestrator | Tuesday 03 June 2025 15:31:45 +0000 (0:00:00.180) 0:02:32.829 ********** 2025-06-03 15:33:50.150471 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-06-03 15:33:50.150479 | orchestrator | 2025-06-03 15:33:50.150488 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-06-03 15:33:50.150497 | orchestrator | Tuesday 03 June 2025 15:31:45 +0000 (0:00:00.448) 0:02:33.278 ********** 2025-06-03 15:33:50.150507 | orchestrator | ok: [testbed-manager] 2025-06-03 15:33:50.150516 | orchestrator | 2025-06-03 15:33:50.150530 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-06-03 15:33:50.150539 | orchestrator | Tuesday 03 June 2025 15:31:46 +0000 (0:00:00.825) 0:02:34.103 ********** 2025-06-03 15:33:50.150548 | orchestrator | ok: [testbed-manager] 2025-06-03 15:33:50.150556 | orchestrator | 2025-06-03 15:33:50.150565 | orchestrator | TASK [kubectl : Add repository gpg key] 
**************************************** 2025-06-03 15:33:50.150573 | orchestrator | Tuesday 03 June 2025 15:31:48 +0000 (0:00:01.625) 0:02:35.729 ********** 2025-06-03 15:33:50.150583 | orchestrator | changed: [testbed-manager] 2025-06-03 15:33:50.150591 | orchestrator | 2025-06-03 15:33:50.150600 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-06-03 15:33:50.150609 | orchestrator | Tuesday 03 June 2025 15:31:49 +0000 (0:00:00.769) 0:02:36.498 ********** 2025-06-03 15:33:50.150618 | orchestrator | ok: [testbed-manager] 2025-06-03 15:33:50.150627 | orchestrator | 2025-06-03 15:33:50.150636 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-06-03 15:33:50.150662 | orchestrator | Tuesday 03 June 2025 15:31:49 +0000 (0:00:00.467) 0:02:36.965 ********** 2025-06-03 15:33:50.150671 | orchestrator | changed: [testbed-manager] 2025-06-03 15:33:50.150680 | orchestrator | 2025-06-03 15:33:50.150689 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-06-03 15:33:50.150698 | orchestrator | Tuesday 03 June 2025 15:31:57 +0000 (0:00:08.356) 0:02:45.322 ********** 2025-06-03 15:33:50.150706 | orchestrator | changed: [testbed-manager] 2025-06-03 15:33:50.150716 | orchestrator | 2025-06-03 15:33:50.150725 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-06-03 15:33:50.150733 | orchestrator | Tuesday 03 June 2025 15:32:11 +0000 (0:00:13.980) 0:02:59.302 ********** 2025-06-03 15:33:50.150742 | orchestrator | ok: [testbed-manager] 2025-06-03 15:33:50.150750 | orchestrator | 2025-06-03 15:33:50.150759 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-06-03 15:33:50.150768 | orchestrator | 2025-06-03 15:33:50.150776 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-06-03 15:33:50.150784 | orchestrator | Tuesday 03 June 2025 15:32:12 +0000 (0:00:00.668) 0:02:59.970 ********** 2025-06-03 15:33:50.150792 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.150799 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.150806 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.150814 | orchestrator | 2025-06-03 15:33:50.150821 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-06-03 15:33:50.150828 | orchestrator | Tuesday 03 June 2025 15:32:13 +0000 (0:00:00.739) 0:03:00.709 ********** 2025-06-03 15:33:50.150836 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.150844 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.150852 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.150860 | orchestrator | 2025-06-03 15:33:50.150867 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-06-03 15:33:50.150879 | orchestrator | Tuesday 03 June 2025 15:32:13 +0000 (0:00:00.409) 0:03:01.118 ********** 2025-06-03 15:33:50.150884 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:33:50.150890 | orchestrator | 2025-06-03 15:33:50.150895 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-06-03 15:33:50.150900 | orchestrator | Tuesday 03 June 2025 15:32:14 +0000 (0:00:00.623) 0:03:01.742 ********** 2025-06-03 
15:33:50.150904 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-03 15:33:50.150909 | orchestrator | 2025-06-03 15:33:50.150920 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-06-03 15:33:50.150925 | orchestrator | Tuesday 03 June 2025 15:32:15 +0000 (0:00:01.272) 0:03:03.014 ********** 2025-06-03 15:33:50.150929 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:33:50.150934 | orchestrator | 2025-06-03 15:33:50.150939 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-06-03 15:33:50.150944 | orchestrator | Tuesday 03 June 2025 15:32:16 +0000 (0:00:00.867) 0:03:03.882 ********** 2025-06-03 15:33:50.150949 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.150953 | orchestrator | 2025-06-03 15:33:50.150958 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-06-03 15:33:50.150963 | orchestrator | Tuesday 03 June 2025 15:32:16 +0000 (0:00:00.235) 0:03:04.117 ********** 2025-06-03 15:33:50.150968 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:33:50.150973 | orchestrator | 2025-06-03 15:33:50.150977 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-06-03 15:33:50.150982 | orchestrator | Tuesday 03 June 2025 15:32:17 +0000 (0:00:01.097) 0:03:05.214 ********** 2025-06-03 15:33:50.150987 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.150992 | orchestrator | 2025-06-03 15:33:50.150997 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-06-03 15:33:50.151002 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:00.234) 0:03:05.449 ********** 2025-06-03 15:33:50.151006 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.151011 | orchestrator | 2025-06-03 15:33:50.151016 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-06-03 15:33:50.151021 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:00.211) 0:03:05.660 ********** 2025-06-03 15:33:50.151026 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.151031 | orchestrator | 2025-06-03 15:33:50.151035 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-06-03 15:33:50.151040 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:00.218) 0:03:05.879 ********** 2025-06-03 15:33:50.151045 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.151050 | orchestrator | 2025-06-03 15:33:50.151055 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-06-03 15:33:50.151059 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:00.199) 0:03:06.078 ********** 2025-06-03 15:33:50.151064 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-03 15:33:50.151069 | orchestrator | 2025-06-03 15:33:50.151074 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-06-03 15:33:50.151078 | orchestrator | Tuesday 03 June 2025 15:32:23 +0000 (0:00:05.203) 0:03:11.282 ********** 2025-06-03 15:33:50.151088 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-06-03 15:33:50.151093 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
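The retry just above is the playbook waiting for the Cilium workloads to become ready after "Install Cilium"; the items that follow show which resources it watches. A rough manual equivalent, assuming Cilium and Hubble were installed into kube-system (the namespace is not visible in this log):

  # wait for each Cilium component to finish rolling out
  kubectl -n kube-system rollout status deployment/cilium-operator --timeout=300s
  kubectl -n kube-system rollout status daemonset/cilium --timeout=300s
  kubectl -n kube-system rollout status deployment/hubble-relay --timeout=300s
  kubectl -n kube-system rollout status deployment/hubble-ui --timeout=300s
  # or let the Cilium CLI perform the same readiness check
  cilium status --wait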
2025-06-03 15:33:50.151098 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-06-03 15:33:50.151103 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-06-03 15:33:50.151108 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-06-03 15:33:50.151112 | orchestrator | 2025-06-03 15:33:50.151117 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-06-03 15:33:50.151126 | orchestrator | Tuesday 03 June 2025 15:33:18 +0000 (0:00:54.891) 0:04:06.174 ********** 2025-06-03 15:33:50.151131 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:33:50.151135 | orchestrator | 2025-06-03 15:33:50.151140 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-06-03 15:33:50.151145 | orchestrator | Tuesday 03 June 2025 15:33:20 +0000 (0:00:01.744) 0:04:07.918 ********** 2025-06-03 15:33:50.151150 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-03 15:33:50.151154 | orchestrator | 2025-06-03 15:33:50.151159 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-06-03 15:33:50.151164 | orchestrator | Tuesday 03 June 2025 15:33:22 +0000 (0:00:02.329) 0:04:10.247 ********** 2025-06-03 15:33:50.151169 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-03 15:33:50.151174 | orchestrator | 2025-06-03 15:33:50.151179 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-06-03 15:33:50.151184 | orchestrator | Tuesday 03 June 2025 15:33:24 +0000 (0:00:01.865) 0:04:12.112 ********** 2025-06-03 15:33:50.151189 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.151194 | orchestrator | 2025-06-03 15:33:50.151198 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-06-03 15:33:50.151203 | orchestrator | Tuesday 03 June 2025 15:33:24 +0000 (0:00:00.226) 0:04:12.339 ********** 2025-06-03 15:33:50.151208 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-06-03 15:33:50.151213 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-06-03 15:33:50.151218 | orchestrator | 2025-06-03 15:33:50.151222 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-06-03 15:33:50.151227 | orchestrator | Tuesday 03 June 2025 15:33:27 +0000 (0:00:02.216) 0:04:14.556 ********** 2025-06-03 15:33:50.151232 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.151237 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.151242 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.151246 | orchestrator | 2025-06-03 15:33:50.151251 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-06-03 15:33:50.151256 | orchestrator | Tuesday 03 June 2025 15:33:27 +0000 (0:00:00.335) 0:04:14.891 ********** 2025-06-03 15:33:50.151261 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.151266 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.151271 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.151275 | orchestrator | 2025-06-03 15:33:50.151280 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-06-03 15:33:50.151285 | orchestrator | 2025-06-03 
15:33:50.151357 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-06-03 15:33:50.151368 | orchestrator | Tuesday 03 June 2025 15:33:28 +0000 (0:00:00.888) 0:04:15.780 ********** 2025-06-03 15:33:50.151373 | orchestrator | ok: [testbed-manager] 2025-06-03 15:33:50.151378 | orchestrator | 2025-06-03 15:33:50.151383 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-06-03 15:33:50.151387 | orchestrator | Tuesday 03 June 2025 15:33:28 +0000 (0:00:00.289) 0:04:16.070 ********** 2025-06-03 15:33:50.151392 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-06-03 15:33:50.151397 | orchestrator | 2025-06-03 15:33:50.151402 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-06-03 15:33:50.151406 | orchestrator | Tuesday 03 June 2025 15:33:28 +0000 (0:00:00.193) 0:04:16.264 ********** 2025-06-03 15:33:50.151411 | orchestrator | changed: [testbed-manager] 2025-06-03 15:33:50.151416 | orchestrator | 2025-06-03 15:33:50.151421 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-06-03 15:33:50.151425 | orchestrator | 2025-06-03 15:33:50.151430 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-06-03 15:33:50.151435 | orchestrator | Tuesday 03 June 2025 15:33:33 +0000 (0:00:05.024) 0:04:21.288 ********** 2025-06-03 15:33:50.151445 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:33:50.151450 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:33:50.151455 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:33:50.151460 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:50.151465 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:50.151470 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:50.151474 | orchestrator | 2025-06-03 15:33:50.151479 | orchestrator | TASK [Manage labels] *********************************************************** 2025-06-03 15:33:50.151484 | orchestrator | Tuesday 03 June 2025 15:33:34 +0000 (0:00:00.551) 0:04:21.840 ********** 2025-06-03 15:33:50.151489 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-03 15:33:50.151494 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-03 15:33:50.151499 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-03 15:33:50.151503 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-03 15:33:50.151508 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-03 15:33:50.151513 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-03 15:33:50.151521 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-03 15:33:50.151526 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-03 15:33:50.151531 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-03 15:33:50.151536 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-03 15:33:50.151541 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=openstack-control-plane=enabled) 2025-06-03 15:33:50.151546 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-03 15:33:50.151551 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-03 15:33:50.151555 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-03 15:33:50.151560 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-03 15:33:50.151565 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-03 15:33:50.151570 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-03 15:33:50.151575 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-03 15:33:50.151579 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-03 15:33:50.151584 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-03 15:33:50.151589 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-03 15:33:50.151594 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-03 15:33:50.151599 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-03 15:33:50.151603 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-03 15:33:50.151608 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-03 15:33:50.151613 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-03 15:33:50.151618 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-03 15:33:50.151623 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-03 15:33:50.151627 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-03 15:33:50.151636 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-03 15:33:50.151641 | orchestrator | 2025-06-03 15:33:50.151666 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-03 15:33:50.151674 | orchestrator | Tuesday 03 June 2025 15:33:46 +0000 (0:00:12.248) 0:04:34.088 ********** 2025-06-03 15:33:50.151687 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.151695 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.151702 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.151709 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.151717 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.151724 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.151729 | orchestrator | 2025-06-03 15:33:50.151734 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-03 15:33:50.151739 | orchestrator | Tuesday 03 June 2025 15:33:47 +0000 (0:00:00.453) 0:04:34.542 ********** 2025-06-03 15:33:50.151743 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:50.151748 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:50.151753 
| orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:50.151758 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:33:50.151762 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:33:50.151767 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:33:50.151772 | orchestrator | 2025-06-03 15:33:50.151777 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:33:50.151782 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:33:50.151789 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-03 15:33:50.151834 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-03 15:33:50.151841 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-03 15:33:50.151846 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-03 15:33:50.151851 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-03 15:33:50.151856 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-03 15:33:50.151861 | orchestrator | 2025-06-03 15:33:50.151866 | orchestrator | 2025-06-03 15:33:50.151874 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:33:50.151963 | orchestrator | Tuesday 03 June 2025 15:33:47 +0000 (0:00:00.574) 0:04:35.117 ********** 2025-06-03 15:33:50.151972 | orchestrator | =============================================================================== 2025-06-03 15:33:50.151977 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.90s 2025-06-03 15:33:50.151982 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 54.89s 2025-06-03 15:33:50.151987 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 17.91s 2025-06-03 15:33:50.151992 | orchestrator | kubectl : Install required packages ------------------------------------ 13.98s 2025-06-03 15:33:50.151997 | orchestrator | Manage labels ---------------------------------------------------------- 12.25s 2025-06-03 15:33:50.152002 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.01s 2025-06-03 15:33:50.152007 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.36s 2025-06-03 15:33:50.152018 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.99s 2025-06-03 15:33:50.152023 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.20s 2025-06-03 15:33:50.152028 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.02s 2025-06-03 15:33:50.152033 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.18s 2025-06-03 15:33:50.152037 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.97s 2025-06-03 15:33:50.152043 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.97s 2025-06-03 15:33:50.152047 | orchestrator | 
k3s_download : Download k3s binary armhf -------------------------------- 2.46s 2025-06-03 15:33:50.152052 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.33s 2025-06-03 15:33:50.152057 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.22s 2025-06-03 15:33:50.152062 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 1.87s 2025-06-03 15:33:50.152067 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.83s 2025-06-03 15:33:50.152072 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 1.74s 2025-06-03 15:33:50.152076 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.74s 2025-06-03 15:33:50.152081 | orchestrator | 2025-06-03 15:33:50 | INFO  | Task 4714d4a8-3b8d-4692-86e4-751a2f36680b is in state SUCCESS 2025-06-03 15:33:50.152087 | orchestrator | 2025-06-03 15:33:50 | INFO  | Task 2936837e-6cdd-4605-afa8-816c890156d5 is in state STARTED 2025-06-03 15:33:50.152092 | orchestrator | 2025-06-03 15:33:50 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:50.152101 | orchestrator | 2025-06-03 15:33:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:53.206551 | orchestrator | 2025-06-03 15:33:53 | INFO  | Task f7b17fc5-09cc-4239-98a1-e6e60221b5cd is in state STARTED 2025-06-03 15:33:53.208615 | orchestrator | 2025-06-03 15:33:53 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:53.209076 | orchestrator | 2025-06-03 15:33:53 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:53.210979 | orchestrator | 2025-06-03 15:33:53 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:53.212047 | orchestrator | 2025-06-03 15:33:53 | INFO  | Task 2936837e-6cdd-4605-afa8-816c890156d5 is in state STARTED 2025-06-03 15:33:53.215235 | orchestrator | 2025-06-03 15:33:53 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:53.215310 | orchestrator | 2025-06-03 15:33:53 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:56.262226 | orchestrator | 2025-06-03 15:33:56 | INFO  | Task f7b17fc5-09cc-4239-98a1-e6e60221b5cd is in state STARTED 2025-06-03 15:33:56.263597 | orchestrator | 2025-06-03 15:33:56 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:56.265831 | orchestrator | 2025-06-03 15:33:56 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:56.267091 | orchestrator | 2025-06-03 15:33:56 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:56.268078 | orchestrator | 2025-06-03 15:33:56 | INFO  | Task 2936837e-6cdd-4605-afa8-816c890156d5 is in state SUCCESS 2025-06-03 15:33:56.270003 | orchestrator | 2025-06-03 15:33:56 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:56.270301 | orchestrator | 2025-06-03 15:33:56 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:59.349584 | orchestrator | 2025-06-03 15:33:59 | INFO  | Task f7b17fc5-09cc-4239-98a1-e6e60221b5cd is in state STARTED 2025-06-03 15:33:59.352775 | orchestrator | 2025-06-03 15:33:59 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:33:59.356713 | orchestrator | 2025-06-03 15:33:59 | INFO  | Task 
5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:33:59.365184 | orchestrator | 2025-06-03 15:33:59 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:33:59.374882 | orchestrator | 2025-06-03 15:33:59 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:33:59.374999 | orchestrator | 2025-06-03 15:33:59 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:02.430984 | orchestrator | 2025-06-03 15:34:02 | INFO  | Task f7b17fc5-09cc-4239-98a1-e6e60221b5cd is in state SUCCESS 2025-06-03 15:34:02.433348 | orchestrator | 2025-06-03 15:34:02 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:02.436505 | orchestrator | 2025-06-03 15:34:02 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:02.437316 | orchestrator | 2025-06-03 15:34:02 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:02.438907 | orchestrator | 2025-06-03 15:34:02 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:02.438966 | orchestrator | 2025-06-03 15:34:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:05.497312 | orchestrator | 2025-06-03 15:34:05 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:05.497934 | orchestrator | 2025-06-03 15:34:05 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:05.507353 | orchestrator | 2025-06-03 15:34:05 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:05.510172 | orchestrator | 2025-06-03 15:34:05 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:05.510223 | orchestrator | 2025-06-03 15:34:05 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:08.557385 | orchestrator | 2025-06-03 15:34:08 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:08.558932 | orchestrator | 2025-06-03 15:34:08 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:08.561089 | orchestrator | 2025-06-03 15:34:08 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:08.564288 | orchestrator | 2025-06-03 15:34:08 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:08.564329 | orchestrator | 2025-06-03 15:34:08 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:11.607558 | orchestrator | 2025-06-03 15:34:11 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:11.610470 | orchestrator | 2025-06-03 15:34:11 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:11.612354 | orchestrator | 2025-06-03 15:34:11 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:11.613976 | orchestrator | 2025-06-03 15:34:11 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:11.614695 | orchestrator | 2025-06-03 15:34:11 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:14.651110 | orchestrator | 2025-06-03 15:34:14 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:14.657720 | orchestrator | 2025-06-03 15:34:14 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:14.659785 | orchestrator | 2025-06-03 15:34:14 | INFO  | Task 
51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:14.661082 | orchestrator | 2025-06-03 15:34:14 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:14.661129 | orchestrator | 2025-06-03 15:34:14 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:17.706098 | orchestrator | 2025-06-03 15:34:17 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:17.708396 | orchestrator | 2025-06-03 15:34:17 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:17.711010 | orchestrator | 2025-06-03 15:34:17 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:17.713395 | orchestrator | 2025-06-03 15:34:17 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:17.713457 | orchestrator | 2025-06-03 15:34:17 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:20.755842 | orchestrator | 2025-06-03 15:34:20 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:20.756555 | orchestrator | 2025-06-03 15:34:20 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:20.758760 | orchestrator | 2025-06-03 15:34:20 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:20.760581 | orchestrator | 2025-06-03 15:34:20 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:20.760645 | orchestrator | 2025-06-03 15:34:20 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:23.805173 | orchestrator | 2025-06-03 15:34:23 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:23.807346 | orchestrator | 2025-06-03 15:34:23 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:23.808093 | orchestrator | 2025-06-03 15:34:23 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:23.809396 | orchestrator | 2025-06-03 15:34:23 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:23.809451 | orchestrator | 2025-06-03 15:34:23 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:26.848488 | orchestrator | 2025-06-03 15:34:26 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:26.850069 | orchestrator | 2025-06-03 15:34:26 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:26.850902 | orchestrator | 2025-06-03 15:34:26 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:26.853251 | orchestrator | 2025-06-03 15:34:26 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:26.853290 | orchestrator | 2025-06-03 15:34:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:29.889584 | orchestrator | 2025-06-03 15:34:29 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:29.892547 | orchestrator | 2025-06-03 15:34:29 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:29.893296 | orchestrator | 2025-06-03 15:34:29 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:29.897167 | orchestrator | 2025-06-03 15:34:29 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:29.898367 | orchestrator | 2025-06-03 15:34:29 | INFO  | Wait 1 
second(s) until the next check 2025-06-03 15:34:32.934376 | orchestrator | 2025-06-03 15:34:32 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:32.936982 | orchestrator | 2025-06-03 15:34:32 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:32.938331 | orchestrator | 2025-06-03 15:34:32 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:32.940020 | orchestrator | 2025-06-03 15:34:32 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:32.940309 | orchestrator | 2025-06-03 15:34:32 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:36.007848 | orchestrator | 2025-06-03 15:34:36 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:36.008755 | orchestrator | 2025-06-03 15:34:36 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:36.012795 | orchestrator | 2025-06-03 15:34:36 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:36.015293 | orchestrator | 2025-06-03 15:34:36 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:36.016022 | orchestrator | 2025-06-03 15:34:36 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:39.060160 | orchestrator | 2025-06-03 15:34:39 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:39.062334 | orchestrator | 2025-06-03 15:34:39 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:39.063365 | orchestrator | 2025-06-03 15:34:39 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:39.064449 | orchestrator | 2025-06-03 15:34:39 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:39.064863 | orchestrator | 2025-06-03 15:34:39 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:42.106504 | orchestrator | 2025-06-03 15:34:42 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:42.107595 | orchestrator | 2025-06-03 15:34:42 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:42.108928 | orchestrator | 2025-06-03 15:34:42 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:42.110278 | orchestrator | 2025-06-03 15:34:42 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:42.110324 | orchestrator | 2025-06-03 15:34:42 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:45.157483 | orchestrator | 2025-06-03 15:34:45 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state STARTED 2025-06-03 15:34:45.158443 | orchestrator | 2025-06-03 15:34:45 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:45.159382 | orchestrator | 2025-06-03 15:34:45 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:45.160220 | orchestrator | 2025-06-03 15:34:45 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:45.160267 | orchestrator | 2025-06-03 15:34:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:48.203831 | orchestrator | 2025-06-03 15:34:48 | INFO  | Task 645e5140-7f71-471f-a497-979ce3363128 is in state SUCCESS 2025-06-03 15:34:48.205136 | orchestrator | 2025-06-03 15:34:48.205190 | orchestrator | 2025-06-03 
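The INFO lines above are the deployment driver polling its background tasks until each one reaches SUCCESS, sleeping briefly between rounds. A minimal sketch of that polling pattern, assuming a hypothetical get_task_state() helper rather than the actual OSISM client API:

```python
import time

def get_task_state(task_id: str) -> str:
    """Hypothetical helper: return 'STARTED', 'SUCCESS' or 'FAILURE' for a task ID."""
    ...

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    """Poll every task until it has finished, logging each check like the output above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```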
15:34:48.205200 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-03 15:34:48.205210 | orchestrator | 2025-06-03 15:34:48.205219 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-03 15:34:48.205247 | orchestrator | Tuesday 03 June 2025 15:33:53 +0000 (0:00:00.168) 0:00:00.168 ********** 2025-06-03 15:34:48.205256 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-03 15:34:48.205265 | orchestrator | 2025-06-03 15:34:48.205273 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-03 15:34:48.205281 | orchestrator | Tuesday 03 June 2025 15:33:54 +0000 (0:00:00.861) 0:00:01.030 ********** 2025-06-03 15:34:48.205290 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:48.205298 | orchestrator | 2025-06-03 15:34:48.205306 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-03 15:34:48.205315 | orchestrator | Tuesday 03 June 2025 15:33:55 +0000 (0:00:01.205) 0:00:02.236 ********** 2025-06-03 15:34:48.205323 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:48.205332 | orchestrator | 2025-06-03 15:34:48.205340 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:34:48.205349 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:34:48.205359 | orchestrator | 2025-06-03 15:34:48.205367 | orchestrator | 2025-06-03 15:34:48.205375 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:34:48.205384 | orchestrator | Tuesday 03 June 2025 15:33:55 +0000 (0:00:00.404) 0:00:02.640 ********** 2025-06-03 15:34:48.205392 | orchestrator | =============================================================================== 2025-06-03 15:34:48.205401 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.21s 2025-06-03 15:34:48.205410 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.86s 2025-06-03 15:34:48.205418 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.40s 2025-06-03 15:34:48.205426 | orchestrator | 2025-06-03 15:34:48.205434 | orchestrator | 2025-06-03 15:34:48.205442 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-03 15:34:48.205451 | orchestrator | 2025-06-03 15:34:48.205459 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-03 15:34:48.205467 | orchestrator | Tuesday 03 June 2025 15:33:53 +0000 (0:00:00.205) 0:00:00.205 ********** 2025-06-03 15:34:48.205476 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:48.205485 | orchestrator | 2025-06-03 15:34:48.205494 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-03 15:34:48.205502 | orchestrator | Tuesday 03 June 2025 15:33:53 +0000 (0:00:00.588) 0:00:00.794 ********** 2025-06-03 15:34:48.205510 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:48.205519 | orchestrator | 2025-06-03 15:34:48.205527 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-03 15:34:48.205613 | orchestrator | Tuesday 03 June 2025 15:33:54 +0000 (0:00:00.585) 0:00:01.380 ********** 2025-06-03 
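The play above fetches the k3s kubeconfig from the first control-plane node, writes it into the configuration repository, and then rewrites the server address, since a freshly generated k3s kubeconfig typically points at https://127.0.0.1:6443. A minimal sketch of that rewrite step, assuming PyYAML and an illustrative target address (not the playbook's actual implementation):

```python
import yaml

def rewrite_kubeconfig_server(path: str, server: str) -> None:
    """Point every cluster entry in a kubeconfig file at the given API server URL."""
    with open(path) as handle:
        config = yaml.safe_load(handle)
    for cluster in config.get("clusters", []):
        cluster["cluster"]["server"] = server
    with open(path, "w") as handle:
        yaml.safe_dump(config, handle, default_flow_style=False)

# Usage (illustrative address; the playbook decides the real target):
# rewrite_kubeconfig_server("kubeconfig", "https://192.168.16.10:6443")
```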
15:34:48.205622 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-03 15:34:48.205631 | orchestrator | 2025-06-03 15:34:48.205639 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-03 15:34:48.205648 | orchestrator | Tuesday 03 June 2025 15:33:55 +0000 (0:00:00.765) 0:00:02.145 ********** 2025-06-03 15:34:48.205681 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:48.205690 | orchestrator | 2025-06-03 15:34:48.205698 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-03 15:34:48.205707 | orchestrator | Tuesday 03 June 2025 15:33:56 +0000 (0:00:01.089) 0:00:03.235 ********** 2025-06-03 15:34:48.205716 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:48.205723 | orchestrator | 2025-06-03 15:34:48.205729 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-03 15:34:48.205735 | orchestrator | Tuesday 03 June 2025 15:33:57 +0000 (0:00:01.044) 0:00:04.280 ********** 2025-06-03 15:34:48.205741 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-03 15:34:48.205747 | orchestrator | 2025-06-03 15:34:48.205764 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-03 15:34:48.205784 | orchestrator | Tuesday 03 June 2025 15:33:59 +0000 (0:00:02.387) 0:00:06.667 ********** 2025-06-03 15:34:48.205792 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-03 15:34:48.205801 | orchestrator | 2025-06-03 15:34:48.205809 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-03 15:34:48.205817 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:01.198) 0:00:07.866 ********** 2025-06-03 15:34:48.205826 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:48.205834 | orchestrator | 2025-06-03 15:34:48.205841 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-03 15:34:48.205848 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:00.468) 0:00:08.335 ********** 2025-06-03 15:34:48.205854 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:48.205859 | orchestrator | 2025-06-03 15:34:48.205865 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:34:48.205871 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:34:48.205877 | orchestrator | 2025-06-03 15:34:48.205883 | orchestrator | 2025-06-03 15:34:48.205889 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:34:48.205895 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:00.306) 0:00:08.641 ********** 2025-06-03 15:34:48.205901 | orchestrator | =============================================================================== 2025-06-03 15:34:48.205906 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.39s 2025-06-03 15:34:48.205912 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.20s 2025-06-03 15:34:48.205917 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.09s 2025-06-03 15:34:48.205940 | orchestrator | Change server address in the kubeconfig --------------------------------- 1.04s 2025-06-03 15:34:48.205946 | orchestrator | Get 
kubeconfig file ----------------------------------------------------- 0.77s 2025-06-03 15:34:48.205951 | orchestrator | Get home directory of operator user ------------------------------------- 0.59s 2025-06-03 15:34:48.205957 | orchestrator | Create .kube directory -------------------------------------------------- 0.59s 2025-06-03 15:34:48.205963 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.47s 2025-06-03 15:34:48.205968 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s 2025-06-03 15:34:48.205974 | orchestrator | 2025-06-03 15:34:48.205980 | orchestrator | 2025-06-03 15:34:48.205985 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-03 15:34:48.205991 | orchestrator | 2025-06-03 15:34:48.205997 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-03 15:34:48.206003 | orchestrator | Tuesday 03 June 2025 15:32:26 +0000 (0:00:00.138) 0:00:00.138 ********** 2025-06-03 15:34:48.206009 | orchestrator | ok: [localhost] => { 2025-06-03 15:34:48.206089 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-06-03 15:34:48.206096 | orchestrator | } 2025-06-03 15:34:48.206102 | orchestrator | 2025-06-03 15:34:48.206108 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-03 15:34:48.206114 | orchestrator | Tuesday 03 June 2025 15:32:26 +0000 (0:00:00.065) 0:00:00.204 ********** 2025-06-03 15:34:48.206121 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-03 15:34:48.206129 | orchestrator | ...ignoring 2025-06-03 15:34:48.206174 | orchestrator | 2025-06-03 15:34:48.206181 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-03 15:34:48.206187 | orchestrator | Tuesday 03 June 2025 15:32:30 +0000 (0:00:03.693) 0:00:03.897 ********** 2025-06-03 15:34:48.206193 | orchestrator | skipping: [localhost] 2025-06-03 15:34:48.206198 | orchestrator | 2025-06-03 15:34:48.206203 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-03 15:34:48.206217 | orchestrator | Tuesday 03 June 2025 15:32:30 +0000 (0:00:00.242) 0:00:04.139 ********** 2025-06-03 15:34:48.206222 | orchestrator | ok: [localhost] 2025-06-03 15:34:48.206228 | orchestrator | 2025-06-03 15:34:48.206235 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:34:48.206241 | orchestrator | 2025-06-03 15:34:48.206246 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:34:48.206252 | orchestrator | Tuesday 03 June 2025 15:32:30 +0000 (0:00:00.284) 0:00:04.424 ********** 2025-06-03 15:34:48.206258 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:48.206264 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:48.206270 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:48.206275 | orchestrator | 2025-06-03 15:34:48.206280 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:34:48.206286 | orchestrator | Tuesday 03 June 2025 15:32:31 +0000 (0:00:00.375) 0:00:04.799 ********** 2025-06-03 15:34:48.206292 | 
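The kolla_action_rabbitmq play above probes the RabbitMQ management endpoint on 192.168.16.9:15672 and only switches to an upgrade action when the service already answers; on a first deployment the timeout is expected and ignored, as the preceding message says. A rough sketch of that decision using urllib, not the playbook's actual wait_for task:

```python
from urllib.error import URLError
from urllib.request import urlopen

def rabbitmq_already_running(host: str, port: int = 15672, timeout: float = 3.0) -> bool:
    """Return True if the RabbitMQ management UI answers with its banner."""
    try:
        with urlopen(f"http://{host}:{port}/", timeout=timeout) as response:
            return b"RabbitMQ Management" in response.read()
    except (URLError, OSError):
        # Nothing listens yet on a fresh deployment - this is fine.
        return False

# Upgrade only if RabbitMQ is already running, otherwise keep the regular deploy action.
# kolla_action_rabbitmq = "upgrade" if rabbitmq_already_running("192.168.16.9") else "deploy"
```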
orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-06-03 15:34:48.206298 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-03 15:34:48.206303 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-03 15:34:48.206309 | orchestrator | 2025-06-03 15:34:48.206314 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-03 15:34:48.206320 | orchestrator | 2025-06-03 15:34:48.206326 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-03 15:34:48.206331 | orchestrator | Tuesday 03 June 2025 15:32:32 +0000 (0:00:00.932) 0:00:05.732 ********** 2025-06-03 15:34:48.206337 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:34:48.206344 | orchestrator | 2025-06-03 15:34:48.206350 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-03 15:34:48.206361 | orchestrator | Tuesday 03 June 2025 15:32:33 +0000 (0:00:01.237) 0:00:06.970 ********** 2025-06-03 15:34:48.206367 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:48.206372 | orchestrator | 2025-06-03 15:34:48.206378 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-03 15:34:48.206439 | orchestrator | Tuesday 03 June 2025 15:32:34 +0000 (0:00:01.278) 0:00:08.248 ********** 2025-06-03 15:34:48.206446 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:48.206453 | orchestrator | 2025-06-03 15:34:48.206459 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-06-03 15:34:48.206465 | orchestrator | Tuesday 03 June 2025 15:32:34 +0000 (0:00:00.372) 0:00:08.621 ********** 2025-06-03 15:34:48.206471 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:48.206477 | orchestrator | 2025-06-03 15:34:48.206483 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-03 15:34:48.206488 | orchestrator | Tuesday 03 June 2025 15:32:35 +0000 (0:00:00.376) 0:00:08.998 ********** 2025-06-03 15:34:48.206494 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:48.206500 | orchestrator | 2025-06-03 15:34:48.206506 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-03 15:34:48.206512 | orchestrator | Tuesday 03 June 2025 15:32:35 +0000 (0:00:00.371) 0:00:09.369 ********** 2025-06-03 15:34:48.206518 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:48.206523 | orchestrator | 2025-06-03 15:34:48.206529 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-03 15:34:48.206536 | orchestrator | Tuesday 03 June 2025 15:32:36 +0000 (0:00:00.470) 0:00:09.840 ********** 2025-06-03 15:34:48.206542 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:34:48.206548 | orchestrator | 2025-06-03 15:34:48.206554 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-03 15:34:48.206572 | orchestrator | Tuesday 03 June 2025 15:32:37 +0000 (0:00:00.890) 0:00:10.730 ********** 2025-06-03 15:34:48.206586 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:48.206592 | orchestrator | 2025-06-03 15:34:48.206599 | orchestrator | TASK [rabbitmq : List RabbitMQ 
policies] *************************************** 2025-06-03 15:34:48.206605 | orchestrator | Tuesday 03 June 2025 15:32:37 +0000 (0:00:00.909) 0:00:11.640 ********** 2025-06-03 15:34:48.206611 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:48.206617 | orchestrator | 2025-06-03 15:34:48.206623 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-03 15:34:48.206629 | orchestrator | Tuesday 03 June 2025 15:32:38 +0000 (0:00:00.355) 0:00:11.996 ********** 2025-06-03 15:34:48.206635 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:48.206641 | orchestrator | 2025-06-03 15:34:48.206647 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-03 15:34:48.206722 | orchestrator | Tuesday 03 June 2025 15:32:38 +0000 (0:00:00.356) 0:00:12.352 ********** 2025-06-03 15:34:48.206738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:48.206747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:48.206760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:48.206773 | orchestrator | 2025-06-03 15:34:48.206778 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-03 15:34:48.206784 | orchestrator | Tuesday 03 June 2025 15:32:40 +0000 (0:00:01.625) 0:00:13.978 ********** 2025-06-03 15:34:48.206799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:48.206805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:48.206815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:48.206822 | orchestrator | 2025-06-03 15:34:48.206827 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-03 15:34:48.206833 | orchestrator | Tuesday 03 June 2025 15:32:42 +0000 (0:00:02.283) 0:00:16.261 ********** 2025-06-03 15:34:48.206839 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-03 15:34:48.206845 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-03 15:34:48.206854 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-03 15:34:48.206860 | orchestrator | 2025-06-03 15:34:48.206866 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-06-03 15:34:48.206871 | orchestrator | Tuesday 03 June 2025 15:32:44 +0000 (0:00:01.511) 0:00:17.772 ********** 2025-06-03 15:34:48.206876 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-03 15:34:48.206882 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-03 15:34:48.206887 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-03 15:34:48.206893 | orchestrator | 2025-06-03 15:34:48.206903 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-03 15:34:48.206909 | orchestrator | Tuesday 03 June 2025 15:32:46 +0000 (0:00:02.182) 0:00:19.954 ********** 2025-06-03 15:34:48.206915 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-03 15:34:48.206920 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-03 15:34:48.206926 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-03 15:34:48.206931 | orchestrator | 2025-06-03 15:34:48.206936 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-03 15:34:48.206942 | orchestrator | Tuesday 03 June 2025 15:32:47 +0000 (0:00:01.516) 0:00:21.471 ********** 2025-06-03 15:34:48.206948 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-03 15:34:48.206953 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-03 15:34:48.206959 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-03 15:34:48.206964 | orchestrator | 2025-06-03 15:34:48.206970 | orchestrator | TASK [rabbitmq : Copying over definitions.json] 
******************************** 2025-06-03 15:34:48.206975 | orchestrator | Tuesday 03 June 2025 15:32:50 +0000 (0:00:02.375) 0:00:23.846 ********** 2025-06-03 15:34:48.206981 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-03 15:34:48.206987 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-03 15:34:48.206993 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-03 15:34:48.206998 | orchestrator | 2025-06-03 15:34:48.207004 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-03 15:34:48.207009 | orchestrator | Tuesday 03 June 2025 15:32:52 +0000 (0:00:01.943) 0:00:25.790 ********** 2025-06-03 15:34:48.207015 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-03 15:34:48.207021 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-03 15:34:48.207026 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-03 15:34:48.207032 | orchestrator | 2025-06-03 15:34:48.207037 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-03 15:34:48.207043 | orchestrator | Tuesday 03 June 2025 15:32:53 +0000 (0:00:01.380) 0:00:27.170 ********** 2025-06-03 15:34:48.207049 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:48.207055 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:48.207060 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:48.207066 | orchestrator | 2025-06-03 15:34:48.207071 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-03 15:34:48.207077 | orchestrator | Tuesday 03 June 2025 15:32:53 +0000 (0:00:00.385) 0:00:27.555 ********** 2025-06-03 15:34:48.207145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:48.207172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:48.207180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:48.207187 | orchestrator | 2025-06-03 15:34:48.207192 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-06-03 15:34:48.207198 | orchestrator | Tuesday 03 June 2025 15:32:55 +0000 (0:00:01.612) 0:00:29.168 ********** 2025-06-03 15:34:48.207203 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:48.207210 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:48.207216 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:48.207221 | orchestrator | 2025-06-03 15:34:48.207227 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-03 15:34:48.207233 | orchestrator | Tuesday 03 June 2025 15:32:56 +0000 (0:00:00.890) 0:00:30.058 ********** 2025-06-03 15:34:48.207238 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:48.207244 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:48.207257 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:48.207263 | orchestrator | 2025-06-03 15:34:48.207269 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-03 15:34:48.207274 | orchestrator | Tuesday 03 June 2025 15:33:03 +0000 (0:00:07.262) 0:00:37.321 ********** 2025-06-03 15:34:48.207280 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:48.207285 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:48.207291 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:48.207297 | orchestrator | 2025-06-03 15:34:48.207302 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-03 15:34:48.207308 | orchestrator | 2025-06-03 15:34:48.207314 | orchestrator | TASK 
[rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-03 15:34:48.207320 | orchestrator | Tuesday 03 June 2025 15:33:04 +0000 (0:00:00.440) 0:00:37.761 ********** 2025-06-03 15:34:48.207325 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:48.207331 | orchestrator | 2025-06-03 15:34:48.207336 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-03 15:34:48.207341 | orchestrator | Tuesday 03 June 2025 15:33:04 +0000 (0:00:00.621) 0:00:38.383 ********** 2025-06-03 15:34:48.207347 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:48.207352 | orchestrator | 2025-06-03 15:34:48.207357 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-03 15:34:48.207363 | orchestrator | Tuesday 03 June 2025 15:33:04 +0000 (0:00:00.233) 0:00:38.616 ********** 2025-06-03 15:34:48.207372 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:48.207378 | orchestrator | 2025-06-03 15:34:48.207384 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-03 15:34:48.207389 | orchestrator | Tuesday 03 June 2025 15:33:06 +0000 (0:00:01.726) 0:00:40.343 ********** 2025-06-03 15:34:48.207395 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:48.207401 | orchestrator | 2025-06-03 15:34:48.207406 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-03 15:34:48.207412 | orchestrator | 2025-06-03 15:34:48.207418 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-03 15:34:48.207424 | orchestrator | Tuesday 03 June 2025 15:34:03 +0000 (0:00:57.234) 0:01:37.578 ********** 2025-06-03 15:34:48.207429 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:48.207435 | orchestrator | 2025-06-03 15:34:48.207441 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-03 15:34:48.207447 | orchestrator | Tuesday 03 June 2025 15:34:04 +0000 (0:00:00.659) 0:01:38.238 ********** 2025-06-03 15:34:48.207452 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:48.207458 | orchestrator | 2025-06-03 15:34:48.207463 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-03 15:34:48.207469 | orchestrator | Tuesday 03 June 2025 15:34:05 +0000 (0:00:00.488) 0:01:38.727 ********** 2025-06-03 15:34:48.207474 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:48.207480 | orchestrator | 2025-06-03 15:34:48.207486 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-03 15:34:48.207492 | orchestrator | Tuesday 03 June 2025 15:34:12 +0000 (0:00:07.105) 0:01:45.832 ********** 2025-06-03 15:34:48.207497 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:48.207503 | orchestrator | 2025-06-03 15:34:48.207508 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-03 15:34:48.207514 | orchestrator | 2025-06-03 15:34:48.207519 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-03 15:34:48.207532 | orchestrator | Tuesday 03 June 2025 15:34:24 +0000 (0:00:11.987) 0:01:57.819 ********** 2025-06-03 15:34:48.207537 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:48.207543 | orchestrator | 2025-06-03 15:34:48.207548 | orchestrator | TASK [rabbitmq : Put RabbitMQ 
node into maintenance mode] ********************** 2025-06-03 15:34:48.207554 | orchestrator | Tuesday 03 June 2025 15:34:24 +0000 (0:00:00.701) 0:01:58.521 ********** 2025-06-03 15:34:48.207560 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:48.207572 | orchestrator | 2025-06-03 15:34:48.207577 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-03 15:34:48.207583 | orchestrator | Tuesday 03 June 2025 15:34:25 +0000 (0:00:00.386) 0:01:58.908 ********** 2025-06-03 15:34:48.207589 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:48.207594 | orchestrator | 2025-06-03 15:34:48.207600 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-03 15:34:48.207606 | orchestrator | Tuesday 03 June 2025 15:34:27 +0000 (0:00:01.925) 0:02:00.833 ********** 2025-06-03 15:34:48.207611 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:48.207617 | orchestrator | 2025-06-03 15:34:48.207623 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-03 15:34:48.207708 | orchestrator | 2025-06-03 15:34:48.207719 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-03 15:34:48.207725 | orchestrator | Tuesday 03 June 2025 15:34:43 +0000 (0:00:16.383) 0:02:17.216 ********** 2025-06-03 15:34:48.207731 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:34:48.207736 | orchestrator | 2025-06-03 15:34:48.207742 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-06-03 15:34:48.207748 | orchestrator | Tuesday 03 June 2025 15:34:44 +0000 (0:00:00.791) 0:02:18.008 ********** 2025-06-03 15:34:48.207754 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-03 15:34:48.207760 | orchestrator | enable_outward_rabbitmq_True 2025-06-03 15:34:48.207766 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-03 15:34:48.207772 | orchestrator | outward_rabbitmq_restart 2025-06-03 15:34:48.207777 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:48.207783 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:48.207788 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:48.207793 | orchestrator | 2025-06-03 15:34:48.207799 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-03 15:34:48.207804 | orchestrator | skipping: no hosts matched 2025-06-03 15:34:48.207809 | orchestrator | 2025-06-03 15:34:48.207814 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-03 15:34:48.207820 | orchestrator | skipping: no hosts matched 2025-06-03 15:34:48.207825 | orchestrator | 2025-06-03 15:34:48.207830 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-06-03 15:34:48.207835 | orchestrator | skipping: no hosts matched 2025-06-03 15:34:48.207841 | orchestrator | 2025-06-03 15:34:48.207847 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:34:48.207854 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-03 15:34:48.207862 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-03 15:34:48.207869 | orchestrator | 
testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:34:48.207875 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:34:48.207880 | orchestrator | 2025-06-03 15:34:48.207887 | orchestrator | 2025-06-03 15:34:48.207892 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:34:48.207898 | orchestrator | Tuesday 03 June 2025 15:34:46 +0000 (0:00:02.546) 0:02:20.555 ********** 2025-06-03 15:34:48.207911 | orchestrator | =============================================================================== 2025-06-03 15:34:48.207917 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 85.61s 2025-06-03 15:34:48.207922 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.76s 2025-06-03 15:34:48.207928 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.26s 2025-06-03 15:34:48.207941 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.69s 2025-06-03 15:34:48.207946 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.55s 2025-06-03 15:34:48.207952 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.38s 2025-06-03 15:34:48.207958 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.28s 2025-06-03 15:34:48.207964 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.18s 2025-06-03 15:34:48.207969 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.98s 2025-06-03 15:34:48.207975 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.94s 2025-06-03 15:34:48.207980 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.63s 2025-06-03 15:34:48.207986 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.61s 2025-06-03 15:34:48.207992 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.52s 2025-06-03 15:34:48.207998 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.51s 2025-06-03 15:34:48.208003 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.38s 2025-06-03 15:34:48.208017 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.28s 2025-06-03 15:34:48.208023 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.24s 2025-06-03 15:34:48.208029 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.11s 2025-06-03 15:34:48.208034 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s 2025-06-03 15:34:48.208040 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.91s 2025-06-03 15:34:48.208046 | orchestrator | 2025-06-03 15:34:48 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:48.208149 | orchestrator | 2025-06-03 15:34:48 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:48.208157 | orchestrator | 2025-06-03 15:34:48 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in 
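The restart plays above cycle the RabbitMQ cluster one node at a time: restart the container on a node, wait until RabbitMQ is up again there (the recap shows waits between roughly 12 and 57 seconds), then move to the next node, and finally enable all stable feature flags. A rough sketch of that rolling pattern, with hypothetical restart_container() and node_is_healthy() helpers standing in for the real kolla-ansible handlers:

```python
import subprocess
import time

def restart_container(node: str, name: str = "rabbitmq") -> None:
    """Hypothetical helper: restart the named container on the given node."""
    ...

def node_is_healthy(node: str) -> bool:
    """Hypothetical helper: True once RabbitMQ on the node answers again."""
    ...

def rolling_restart(nodes, poll_interval: float = 5.0) -> None:
    for node in nodes:
        restart_container(node)
        while not node_is_healthy(node):
            time.sleep(poll_interval)
    # After every node is back, enable the stable feature flags once, e.g.:
    subprocess.run(["ssh", nodes[0], "docker", "exec", "rabbitmq",
                    "rabbitmqctl", "enable_feature_flag", "all"], check=True)

# rolling_restart(["testbed-node-0", "testbed-node-1", "testbed-node-2"])
```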
state STARTED 2025-06-03 15:34:48.208163 | orchestrator | 2025-06-03 15:34:48 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:51.254731 | orchestrator | 2025-06-03 15:34:51 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:51.256460 | orchestrator | 2025-06-03 15:34:51 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:51.257427 | orchestrator | 2025-06-03 15:34:51 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:51.257457 | orchestrator | 2025-06-03 15:34:51 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:54.307491 | orchestrator | 2025-06-03 15:34:54 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:54.312140 | orchestrator | 2025-06-03 15:34:54 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:54.316846 | orchestrator | 2025-06-03 15:34:54 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:54.316918 | orchestrator | 2025-06-03 15:34:54 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:57.374543 | orchestrator | 2025-06-03 15:34:57 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:34:57.374724 | orchestrator | 2025-06-03 15:34:57 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:34:57.375540 | orchestrator | 2025-06-03 15:34:57 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:34:57.375611 | orchestrator | 2025-06-03 15:34:57 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:00.444837 | orchestrator | 2025-06-03 15:35:00 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:35:00.445927 | orchestrator | 2025-06-03 15:35:00 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:35:00.448190 | orchestrator | 2025-06-03 15:35:00 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:35:00.448220 | orchestrator | 2025-06-03 15:35:00 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:03.483569 | orchestrator | 2025-06-03 15:35:03 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:35:03.488854 | orchestrator | 2025-06-03 15:35:03 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:35:03.488918 | orchestrator | 2025-06-03 15:35:03 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:35:03.489260 | orchestrator | 2025-06-03 15:35:03 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:06.529047 | orchestrator | 2025-06-03 15:35:06 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:35:06.529238 | orchestrator | 2025-06-03 15:35:06 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:35:06.531085 | orchestrator | 2025-06-03 15:35:06 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED 2025-06-03 15:35:06.531124 | orchestrator | 2025-06-03 15:35:06 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:09.568183 | orchestrator | 2025-06-03 15:35:09 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:35:09.570274 | orchestrator | 2025-06-03 15:35:09 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:35:09.574115 | 
orchestrator | 2025-06-03 15:35:09 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state STARTED
2025-06-03 15:35:09.574642 | orchestrator | 2025-06-03 15:35:09 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:35:12 to 15:35:55 | orchestrator | INFO  | Tasks 5cb13824-72d5-4b85-b008-e67536fcf76e, 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c and 00a4582e-b846-4d9b-9738-62107bf0e82b re-checked roughly every 3 seconds; each check reports all of them in state STARTED, followed by "Wait 1 second(s) until the next check".
2025-06-03 15:35:58 to 15:36:13 | orchestrator | INFO  | Task 5fae8146-fd4f-4ca3-89ed-2e8e71a7c409 additionally reported in state STARTED on every check.
2025-06-03 15:36:16.579520 | orchestrator | 2025-06-03 15:36:16 | INFO  | Task 5fae8146-fd4f-4ca3-89ed-2e8e71a7c409 is in state SUCCESS
2025-06-03 15:36:19 to 15:36:22 | orchestrator | INFO  | Tasks 5cb13824-72d5-4b85-b008-e67536fcf76e, 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c and 00a4582e-b846-4d9b-9738-62107bf0e82b still in state STARTED.
2025-06-03 15:36:25.713396 | orchestrator | 2025-06-03 15:36:25 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED
2025-06-03 15:36:25.715771 | orchestrator | 2025-06-03 15:36:25 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED
2025-06-03 15:36:25.715844 | orchestrator | 2025-06-03 15:36:25 | INFO  | Task 00a4582e-b846-4d9b-9738-62107bf0e82b is in state SUCCESS
2025-06-03 15:36:25.718619 | orchestrator |
2025-06-03 15:36:25.718707 | orchestrator | None
2025-06-03 15:36:25.718717 | orchestrator |
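The polling above is the deployment wrapper waiting for its background task IDs to leave the Celery-style STARTED state before it prints the captured Ansible output. A minimal sketch of such a wait loop is shown below; get_task_state is a hypothetical placeholder for whatever lookup the tooling really performs (it is not an actual osism command), and the cadence simply mirrors the timestamps in the log.

#!/usr/bin/env bash
# Sketch of the wait loop seen above: poll a set of task IDs until every one
# of them reports SUCCESS (or FAILURE).
# get_task_state is a placeholder, NOT a real osism command.
tasks=(
  5cb13824-72d5-4b85-b008-e67536fcf76e
  51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c
  00a4582e-b846-4d9b-9738-62107bf0e82b
)

get_task_state() {   # replace with the real state lookup (REST call, CLI, ...)
  echo "STARTED"
}

while ((${#tasks[@]})); do
  remaining=()
  for id in "${tasks[@]}"; do
    state="$(get_task_state "$id")"
    echo "$(date '+%F %T') | INFO  | Task $id is in state $state"
    [[ "$state" == "SUCCESS" || "$state" == "FAILURE" ]] || remaining+=("$id")
  done
  tasks=("${remaining[@]}")
  ((${#tasks[@]})) && { echo "Wait 1 second(s) until the next check"; sleep 1; }
done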
15:36:25.718727 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:36:25.718764 | orchestrator | 2025-06-03 15:36:25.718783 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:36:25.718791 | orchestrator | Tuesday 03 June 2025 15:33:13 +0000 (0:00:00.179) 0:00:00.179 ********** 2025-06-03 15:36:25.718817 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.718827 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.718834 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.718841 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:36:25.718900 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:36:25.718935 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:36:25.718944 | orchestrator | 2025-06-03 15:36:25.718951 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:36:25.718959 | orchestrator | Tuesday 03 June 2025 15:33:14 +0000 (0:00:00.695) 0:00:00.875 ********** 2025-06-03 15:36:25.718967 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-06-03 15:36:25.718974 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-06-03 15:36:25.718981 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-06-03 15:36:25.718989 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-06-03 15:36:25.718996 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-06-03 15:36:25.719004 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-06-03 15:36:25.719011 | orchestrator | 2025-06-03 15:36:25.719019 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-06-03 15:36:25.719025 | orchestrator | 2025-06-03 15:36:25.719032 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-06-03 15:36:25.719038 | orchestrator | Tuesday 03 June 2025 15:33:15 +0000 (0:00:01.179) 0:00:02.054 ********** 2025-06-03 15:36:25.719046 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:36:25.719054 | orchestrator | 2025-06-03 15:36:25.719061 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-06-03 15:36:25.719069 | orchestrator | Tuesday 03 June 2025 15:33:17 +0000 (0:00:01.934) 0:00:03.989 ********** 2025-06-03 15:36:25.719078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719123 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719145 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719154 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719184 | orchestrator | 2025-06-03 15:36:25.719191 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-06-03 15:36:25.719199 | orchestrator | Tuesday 03 June 2025 15:33:19 +0000 (0:00:01.823) 0:00:05.813 ********** 2025-06-03 15:36:25.719206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719249 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719267 | orchestrator | 2025-06-03 15:36:25.719275 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-06-03 15:36:25.719288 | orchestrator | Tuesday 03 June 2025 15:33:21 +0000 (0:00:02.618) 0:00:08.431 ********** 2025-06-03 15:36:25.719297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719364 | orchestrator | 2025-06-03 15:36:25.719376 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-06-03 15:36:25.719393 | orchestrator | Tuesday 03 June 2025 15:33:23 +0000 (0:00:01.840) 0:00:10.272 ********** 2025-06-03 15:36:25.719403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719467 | orchestrator | 2025-06-03 15:36:25.719476 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-06-03 15:36:25.719485 | orchestrator | Tuesday 03 June 2025 15:33:25 +0000 (0:00:01.714) 0:00:11.986 ********** 2025-06-03 15:36:25.719493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719527 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.719558 | orchestrator | 2025-06-03 15:36:25.719567 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-06-03 15:36:25.719575 | orchestrator | Tuesday 03 June 2025 15:33:26 +0000 (0:00:01.546) 0:00:13.532 ********** 2025-06-03 15:36:25.719584 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:36:25.719593 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:36:25.719601 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:36:25.719608 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:36:25.719615 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:36:25.719622 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:36:25.719630 | orchestrator | 2025-06-03 15:36:25.719637 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-06-03 15:36:25.719645 | orchestrator | Tuesday 03 June 2025 15:33:29 +0000 (0:00:02.416) 0:00:15.949 ********** 2025-06-03 15:36:25.719652 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-06-03 15:36:25.719660 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-06-03 15:36:25.719711 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-06-03 15:36:25.719724 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-06-03 15:36:25.719731 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-06-03 15:36:25.719738 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-06-03 15:36:25.719751 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-03 15:36:25.719759 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-03 15:36:25.719766 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-03 15:36:25.719773 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-03 15:36:25.719780 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-03 15:36:25.719787 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-03 15:36:25.719795 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-03 15:36:25.719804 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-03 15:36:25.719811 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-03 15:36:25.719818 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 
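The "Create br-int bridge on OpenvSwitch" and "Configure OVN in OVSDB" tasks above (the remaining per-node items of the latter continue below) amount to creating the integration bridge and writing a handful of external_ids keys into the local Open vSwitch database. A rough manual equivalent for a single node is sketched here with plain ovs-vsctl calls; the addresses and mappings are the testbed-node-0 values from the log, and the exact options kolla-ansible sets are not reproduced.

# Sketch only: roughly what the ovn-controller role configures on one node
# (values taken from testbed-node-0 above); not the exact kolla-ansible calls.
ovs-vsctl --may-exist add-br br-int          # integration bridge for ovn-controller

ovs-vsctl set open_vswitch . external_ids:ovn-encap-ip=192.168.16.10
ovs-vsctl set open_vswitch . external_ids:ovn-encap-type=geneve
ovs-vsctl set open_vswitch . \
    external_ids:ovn-remote='"tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"'
ovs-vsctl set open_vswitch . external_ids:ovn-remote-probe-interval=60000
ovs-vsctl set open_vswitch . external_ids:ovn-openflow-probe-interval=60
ovs-vsctl set open_vswitch . external_ids:ovn-monitor-all=false

# Gateway/network chassis additionally get the bridge mapping and CMS options;
# on the remaining chassis these keys are removed instead.
ovs-vsctl set open_vswitch . external_ids:ovn-bridge-mappings=physnet1:br-ex
ovs-vsctl set open_vswitch . \
    external_ids:ovn-cms-options='"enable-chassis-as-gw,availability-zones=nova"'

# Show what ended up in the local OVSDB
ovs-vsctl get open_vswitch . external_ids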
2025-06-03 15:36:25.719825 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-03 15:36:25.719833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-03 15:36:25.719840 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-03 15:36:25.719848 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-03 15:36:25.719856 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-03 15:36:25.719862 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-03 15:36:25.719870 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-03 15:36:25.719876 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-03 15:36:25.719884 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-03 15:36:25.719891 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-03 15:36:25.719898 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-03 15:36:25.719905 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-03 15:36:25.719912 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-03 15:36:25.719919 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-03 15:36:25.719927 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-03 15:36:25.719938 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-03 15:36:25.719945 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-03 15:36:25.719952 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-03 15:36:25.719959 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-03 15:36:25.719971 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-03 15:36:25.719988 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-03 15:36:25.719995 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-03 15:36:25.720001 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-03 15:36:25.720007 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-03 15:36:25.720017 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-03 15:36:25.720023 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-03 15:36:25.720029 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-06-03 15:36:25.720035 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-06-03 15:36:25.720042 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-06-03 15:36:25.720048 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-06-03 15:36:25.720054 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-06-03 15:36:25.720060 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-06-03 15:36:25.720066 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-03 15:36:25.720072 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-03 15:36:25.720078 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-03 15:36:25.720083 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-03 15:36:25.720089 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-03 15:36:25.720095 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-03 15:36:25.720101 | orchestrator | 2025-06-03 15:36:25.720107 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-03 15:36:25.720113 | orchestrator | Tuesday 03 June 2025 15:33:49 +0000 (0:00:20.515) 0:00:36.465 ********** 2025-06-03 15:36:25.720119 | orchestrator | 2025-06-03 15:36:25.720125 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-03 15:36:25.720131 | orchestrator | Tuesday 03 June 2025 15:33:49 +0000 (0:00:00.082) 0:00:36.548 ********** 2025-06-03 15:36:25.720137 | orchestrator | 2025-06-03 15:36:25.720149 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-03 15:36:25.720156 | orchestrator | Tuesday 03 June 2025 15:33:49 +0000 (0:00:00.080) 0:00:36.628 ********** 2025-06-03 15:36:25.720162 | orchestrator | 2025-06-03 15:36:25.720168 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-03 15:36:25.720174 | orchestrator | Tuesday 03 June 2025 15:33:50 +0000 (0:00:00.080) 0:00:36.709 ********** 2025-06-03 15:36:25.720181 | orchestrator | 2025-06-03 15:36:25.720188 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-03 15:36:25.720195 | orchestrator | Tuesday 03 June 2025 15:33:50 +0000 (0:00:00.092) 0:00:36.801 
********** 2025-06-03 15:36:25.720207 | orchestrator | 2025-06-03 15:36:25.720214 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-03 15:36:25.720221 | orchestrator | Tuesday 03 June 2025 15:33:50 +0000 (0:00:00.122) 0:00:36.924 ********** 2025-06-03 15:36:25.720229 | orchestrator | 2025-06-03 15:36:25.720236 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-06-03 15:36:25.720244 | orchestrator | Tuesday 03 June 2025 15:33:50 +0000 (0:00:00.088) 0:00:37.013 ********** 2025-06-03 15:36:25.720251 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.720265 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.720278 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.720285 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:36:25.720293 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:36:25.720304 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:36:25.720312 | orchestrator | 2025-06-03 15:36:25.720319 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-06-03 15:36:25.720326 | orchestrator | Tuesday 03 June 2025 15:33:53 +0000 (0:00:03.347) 0:00:40.360 ********** 2025-06-03 15:36:25.720334 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:36:25.720341 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:36:25.720349 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:36:25.720356 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:36:25.720363 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:36:25.720371 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:36:25.720378 | orchestrator | 2025-06-03 15:36:25.720386 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-06-03 15:36:25.720393 | orchestrator | 2025-06-03 15:36:25.720400 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-03 15:36:25.720408 | orchestrator | Tuesday 03 June 2025 15:35:00 +0000 (0:01:06.489) 0:01:46.850 ********** 2025-06-03 15:36:25.720415 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:36:25.720423 | orchestrator | 2025-06-03 15:36:25.720430 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-03 15:36:25.720438 | orchestrator | Tuesday 03 June 2025 15:35:00 +0000 (0:00:00.550) 0:01:47.400 ********** 2025-06-03 15:36:25.720445 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:36:25.720453 | orchestrator | 2025-06-03 15:36:25.720465 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-06-03 15:36:25.720472 | orchestrator | Tuesday 03 June 2025 15:35:01 +0000 (0:00:00.695) 0:01:48.096 ********** 2025-06-03 15:36:25.720479 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.720487 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.720494 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.720502 | orchestrator | 2025-06-03 15:36:25.720509 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-06-03 15:36:25.720516 | orchestrator | Tuesday 03 June 2025 15:35:02 +0000 (0:00:00.821) 0:01:48.918 ********** 2025-06-03 15:36:25.720524 | orchestrator | ok: [testbed-node-0] 
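Before deciding how to bootstrap the OVN databases, the lookup_cluster tasks that begin here check whether the ovn_nb_db / ovn_sb_db volumes already exist on the three database hosts (the per-host grouping results continue below); that is what determines whether a new cluster is created or an existing one is joined. A manual spot check on one of those hosts could use plain Docker commands like the following; kolla-ansible itself does this through its own modules.

# Manual equivalent of the "Checking for any existing OVN DB container volumes"
# step; plain docker CLI, not the module kolla-ansible actually uses.
docker volume ls --filter name=ovn_nb_db --format '{{ .Name }}'
docker volume ls --filter name=ovn_sb_db --format '{{ .Name }}'

# If a volume exists, show where the clustered OVSDB files live on the host
docker volume inspect ovn_nb_db --format '{{ .Mountpoint }}'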
2025-06-03 15:36:25.720531 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.720538 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.720546 | orchestrator | 2025-06-03 15:36:25.720553 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-06-03 15:36:25.720560 | orchestrator | Tuesday 03 June 2025 15:35:02 +0000 (0:00:00.359) 0:01:49.277 ********** 2025-06-03 15:36:25.720567 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.720574 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.720582 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.720589 | orchestrator | 2025-06-03 15:36:25.720596 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-06-03 15:36:25.720604 | orchestrator | Tuesday 03 June 2025 15:35:02 +0000 (0:00:00.320) 0:01:49.598 ********** 2025-06-03 15:36:25.720611 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.720623 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.720631 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.720638 | orchestrator | 2025-06-03 15:36:25.720645 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-06-03 15:36:25.720652 | orchestrator | Tuesday 03 June 2025 15:35:03 +0000 (0:00:00.535) 0:01:50.133 ********** 2025-06-03 15:36:25.720659 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.720686 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.720694 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.720700 | orchestrator | 2025-06-03 15:36:25.720707 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-06-03 15:36:25.720714 | orchestrator | Tuesday 03 June 2025 15:35:03 +0000 (0:00:00.361) 0:01:50.495 ********** 2025-06-03 15:36:25.720721 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.720728 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.720734 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.720741 | orchestrator | 2025-06-03 15:36:25.720749 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-06-03 15:36:25.720755 | orchestrator | Tuesday 03 June 2025 15:35:04 +0000 (0:00:00.366) 0:01:50.862 ********** 2025-06-03 15:36:25.720763 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.720770 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.720777 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.720784 | orchestrator | 2025-06-03 15:36:25.720791 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-06-03 15:36:25.720798 | orchestrator | Tuesday 03 June 2025 15:35:04 +0000 (0:00:00.311) 0:01:51.173 ********** 2025-06-03 15:36:25.720805 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.720811 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.720818 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.720825 | orchestrator | 2025-06-03 15:36:25.720832 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-06-03 15:36:25.720839 | orchestrator | Tuesday 03 June 2025 15:35:05 +0000 (0:00:00.483) 0:01:51.657 ********** 2025-06-03 15:36:25.720846 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.720853 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.720860 | orchestrator | skipping: 
[testbed-node-2] 2025-06-03 15:36:25.720867 | orchestrator | 2025-06-03 15:36:25.720874 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-06-03 15:36:25.720881 | orchestrator | Tuesday 03 June 2025 15:35:05 +0000 (0:00:00.296) 0:01:51.954 ********** 2025-06-03 15:36:25.720888 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.720895 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.720901 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.720908 | orchestrator | 2025-06-03 15:36:25.720915 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-06-03 15:36:25.720922 | orchestrator | Tuesday 03 June 2025 15:35:05 +0000 (0:00:00.299) 0:01:52.254 ********** 2025-06-03 15:36:25.720929 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.720936 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.720943 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.720950 | orchestrator | 2025-06-03 15:36:25.720956 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-06-03 15:36:25.720968 | orchestrator | Tuesday 03 June 2025 15:35:05 +0000 (0:00:00.285) 0:01:52.539 ********** 2025-06-03 15:36:25.720981 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.720997 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721004 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721011 | orchestrator | 2025-06-03 15:36:25.721018 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-03 15:36:25.721024 | orchestrator | Tuesday 03 June 2025 15:35:06 +0000 (0:00:00.513) 0:01:53.053 ********** 2025-06-03 15:36:25.721030 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.721035 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721047 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721054 | orchestrator | 2025-06-03 15:36:25.721061 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-03 15:36:25.721068 | orchestrator | Tuesday 03 June 2025 15:35:06 +0000 (0:00:00.305) 0:01:53.359 ********** 2025-06-03 15:36:25.721074 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.721081 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721087 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721093 | orchestrator | 2025-06-03 15:36:25.721100 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-03 15:36:25.721107 | orchestrator | Tuesday 03 June 2025 15:35:07 +0000 (0:00:00.335) 0:01:53.694 ********** 2025-06-03 15:36:25.721114 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.721121 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721127 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721134 | orchestrator | 2025-06-03 15:36:25.721146 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-06-03 15:36:25.721153 | orchestrator | Tuesday 03 June 2025 15:35:07 +0000 (0:00:00.353) 0:01:54.048 ********** 2025-06-03 15:36:25.721160 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.721167 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721174 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721181 | orchestrator | 
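All of the existing-cluster checks in this stretch (service port liveness, database information, leader/follower split) are skipped because this is a fresh deployment with no pre-existing OVN NB/SB cluster. Once the ovn_nb_db and ovn_sb_db containers are running, the same information can be read by hand roughly as follows; the control-socket paths are the usual OVN defaults and may differ inside the kolla containers.

# Raft cluster state of the OVN databases (socket paths are common defaults
# and may need adjusting for the kolla container layout).
docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
docker exec ovn_sb_db ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

# Plain TCP reachability of the NB (6641) and SB (6642) ports, similar to the
# skipped "service port liveness" tasks.
nc -z -w 3 192.168.16.10 6641 && echo "NB port reachable"
nc -z -w 3 192.168.16.10 6642 && echo "SB port reachable"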
2025-06-03 15:36:25.721188 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-03 15:36:25.721195 | orchestrator | Tuesday 03 June 2025 15:35:07 +0000 (0:00:00.529) 0:01:54.578 ********** 2025-06-03 15:36:25.721202 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.721209 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721216 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721223 | orchestrator | 2025-06-03 15:36:25.721230 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-03 15:36:25.721237 | orchestrator | Tuesday 03 June 2025 15:35:08 +0000 (0:00:00.333) 0:01:54.912 ********** 2025-06-03 15:36:25.721244 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:36:25.721251 | orchestrator | 2025-06-03 15:36:25.721258 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-03 15:36:25.721264 | orchestrator | Tuesday 03 June 2025 15:35:08 +0000 (0:00:00.541) 0:01:55.453 ********** 2025-06-03 15:36:25.721271 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.721278 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.721285 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.721292 | orchestrator | 2025-06-03 15:36:25.721299 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-03 15:36:25.721306 | orchestrator | Tuesday 03 June 2025 15:35:09 +0000 (0:00:00.872) 0:01:56.325 ********** 2025-06-03 15:36:25.721313 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.721320 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.721328 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.721334 | orchestrator | 2025-06-03 15:36:25.721341 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-03 15:36:25.721348 | orchestrator | Tuesday 03 June 2025 15:35:10 +0000 (0:00:00.479) 0:01:56.805 ********** 2025-06-03 15:36:25.721355 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.721362 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721369 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721376 | orchestrator | 2025-06-03 15:36:25.721383 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-03 15:36:25.721390 | orchestrator | Tuesday 03 June 2025 15:35:10 +0000 (0:00:00.367) 0:01:57.172 ********** 2025-06-03 15:36:25.721396 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.721403 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721410 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721417 | orchestrator | 2025-06-03 15:36:25.721429 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-03 15:36:25.721436 | orchestrator | Tuesday 03 June 2025 15:35:10 +0000 (0:00:00.359) 0:01:57.531 ********** 2025-06-03 15:36:25.721443 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.721450 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721457 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721464 | orchestrator | 2025-06-03 15:36:25.721471 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-03 
15:36:25.721478 | orchestrator | Tuesday 03 June 2025 15:35:11 +0000 (0:00:00.504) 0:01:58.036 ********** 2025-06-03 15:36:25.721485 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.721492 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721499 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721506 | orchestrator | 2025-06-03 15:36:25.721513 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-03 15:36:25.721520 | orchestrator | Tuesday 03 June 2025 15:35:11 +0000 (0:00:00.361) 0:01:58.397 ********** 2025-06-03 15:36:25.721527 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.721533 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721540 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721547 | orchestrator | 2025-06-03 15:36:25.721553 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-03 15:36:25.721561 | orchestrator | Tuesday 03 June 2025 15:35:12 +0000 (0:00:00.309) 0:01:58.707 ********** 2025-06-03 15:36:25.721567 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.721574 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.721581 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.721588 | orchestrator | 2025-06-03 15:36:25.721595 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-03 15:36:25.721601 | orchestrator | Tuesday 03 June 2025 15:35:12 +0000 (0:00:00.330) 0:01:59.038 ********** 2025-06-03 15:36:25.721610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.721618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.721954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.721979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.721990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722164 | orchestrator | 2025-06-03 15:36:25.722172 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-03 15:36:25.722180 | orchestrator | Tuesday 03 June 2025 15:35:13 +0000 (0:00:01.582) 0:02:00.620 ********** 2025-06-03 15:36:25.722191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722277 | orchestrator | 2025-06-03 15:36:25.722284 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-03 15:36:25.722291 | orchestrator | Tuesday 03 June 2025 15:35:17 +0000 (0:00:03.739) 0:02:04.360 ********** 2025-06-03 15:36:25.722298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722309 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.722375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 
15:36:25.722382 | orchestrator | 2025-06-03 15:36:25.722389 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-03 15:36:25.722396 | orchestrator | Tuesday 03 June 2025 15:35:19 +0000 (0:00:02.171) 0:02:06.532 ********** 2025-06-03 15:36:25.722404 | orchestrator | 2025-06-03 15:36:25.722412 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-03 15:36:25.722419 | orchestrator | Tuesday 03 June 2025 15:35:19 +0000 (0:00:00.079) 0:02:06.611 ********** 2025-06-03 15:36:25.722426 | orchestrator | 2025-06-03 15:36:25.722434 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-03 15:36:25.722441 | orchestrator | Tuesday 03 June 2025 15:35:20 +0000 (0:00:00.068) 0:02:06.680 ********** 2025-06-03 15:36:25.722449 | orchestrator | 2025-06-03 15:36:25.722456 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-03 15:36:25.722463 | orchestrator | Tuesday 03 June 2025 15:35:20 +0000 (0:00:00.070) 0:02:06.750 ********** 2025-06-03 15:36:25.722470 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:36:25.722477 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:36:25.722484 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:36:25.722491 | orchestrator | 2025-06-03 15:36:25.722502 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-03 15:36:25.722509 | orchestrator | Tuesday 03 June 2025 15:35:27 +0000 (0:00:06.893) 0:02:13.643 ********** 2025-06-03 15:36:25.722517 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:36:25.722524 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:36:25.722530 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:36:25.722538 | orchestrator | 2025-06-03 15:36:25.722546 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-03 15:36:25.722554 | orchestrator | Tuesday 03 June 2025 15:35:34 +0000 (0:00:07.864) 0:02:21.508 ********** 2025-06-03 15:36:25.722563 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:36:25.722571 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:36:25.722585 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:36:25.722593 | orchestrator | 2025-06-03 15:36:25.722602 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-03 15:36:25.722610 | orchestrator | Tuesday 03 June 2025 15:35:42 +0000 (0:00:08.037) 0:02:29.545 ********** 2025-06-03 15:36:25.722617 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.722626 | orchestrator | 2025-06-03 15:36:25.722633 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-03 15:36:25.722642 | orchestrator | Tuesday 03 June 2025 15:35:43 +0000 (0:00:00.113) 0:02:29.659 ********** 2025-06-03 15:36:25.722649 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.722658 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.722689 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.722696 | orchestrator | 2025-06-03 15:36:25.722706 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-03 15:36:25.722712 | orchestrator | Tuesday 03 June 2025 15:35:43 +0000 (0:00:00.862) 0:02:30.521 ********** 2025-06-03 15:36:25.722719 | orchestrator | skipping: [testbed-node-1] 2025-06-03 
15:36:25.722727 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.722734 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:36:25.722741 | orchestrator | 2025-06-03 15:36:25.722749 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-03 15:36:25.722756 | orchestrator | Tuesday 03 June 2025 15:35:44 +0000 (0:00:01.041) 0:02:31.563 ********** 2025-06-03 15:36:25.722763 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.722771 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.722779 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.722787 | orchestrator | 2025-06-03 15:36:25.722794 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-03 15:36:25.722802 | orchestrator | Tuesday 03 June 2025 15:35:45 +0000 (0:00:00.872) 0:02:32.435 ********** 2025-06-03 15:36:25.722810 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.722818 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.722826 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:36:25.722833 | orchestrator | 2025-06-03 15:36:25.722841 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-03 15:36:25.722849 | orchestrator | Tuesday 03 June 2025 15:35:46 +0000 (0:00:00.582) 0:02:33.018 ********** 2025-06-03 15:36:25.722857 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.722870 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.722892 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.722902 | orchestrator | 2025-06-03 15:36:25.722909 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-03 15:36:25.722917 | orchestrator | Tuesday 03 June 2025 15:35:47 +0000 (0:00:00.891) 0:02:33.910 ********** 2025-06-03 15:36:25.722925 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.722932 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.722939 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.722946 | orchestrator | 2025-06-03 15:36:25.722953 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-03 15:36:25.722960 | orchestrator | Tuesday 03 June 2025 15:35:49 +0000 (0:00:01.918) 0:02:35.828 ********** 2025-06-03 15:36:25.722967 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.722975 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.722982 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.722988 | orchestrator | 2025-06-03 15:36:25.722995 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-03 15:36:25.723002 | orchestrator | Tuesday 03 June 2025 15:35:49 +0000 (0:00:00.382) 0:02:36.210 ********** 2025-06-03 15:36:25.723010 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723023 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723029 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723039 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723046 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723053 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723066 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723073 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723080 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723087 | orchestrator | 2025-06-03 15:36:25.723094 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-03 15:36:25.723101 | orchestrator | Tuesday 03 June 2025 15:35:51 +0000 (0:00:01.481) 0:02:37.692 ********** 
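The config.json tasks in this play stage a small bootstrap file under /etc/kolla/<service>/ on each host; that directory is bind-mounted into the container at /var/lib/kolla/config_files/ (visible in the volume lists above), and the container start script reads it to copy configuration into place and launch the service. Below is a minimal Ansible sketch of the pattern, assuming a hypothetical ovn-nb-db.json.j2 template and a trimmed-down service map — an illustrative reconstruction, not the verbatim kolla-ansible role:

- name: Distribute kolla config.json bootstrap files (illustrative sketch)
  hosts: ovn-nb-db
  become: true
  vars:
    # Stand-in for the role's full service dictionary; only the key and
    # container name matter for this sketch.
    ovn_db_services:
      ovn-nb-db:
        container_name: ovn_nb_db
  tasks:
    - name: Copying over config.json files for services
      ansible.builtin.template:
        src: "{{ item.key }}.json.j2"                  # hypothetical template name
        dest: "/etc/kolla/{{ item.key }}/config.json"  # bind-mounted to /var/lib/kolla/config_files/
        mode: "0660"
      with_dict: "{{ ovn_db_services }}"

A changed result on this task is what notifies the "Restart ... container" handlers, which is why the ovn-nb-db and ovn-sb-db containers are restarted again once handlers are flushed below.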
2025-06-03 15:36:25.723108 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723120 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723128 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723145 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723171 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723178 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723185 | orchestrator | 2025-06-03 15:36:25.723192 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-03 15:36:25.723199 | orchestrator | Tuesday 03 June 2025 15:35:55 +0000 (0:00:04.687) 0:02:42.382 ********** 2025-06-03 15:36:25.723205 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723217 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723224 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723242 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723273 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:25.723280 | orchestrator | 2025-06-03 15:36:25.723287 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-03 15:36:25.723293 | orchestrator | Tuesday 03 June 2025 15:35:59 +0000 (0:00:03.498) 0:02:45.880 ********** 2025-06-03 15:36:25.723300 | orchestrator | 2025-06-03 15:36:25.723307 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-03 15:36:25.723313 | orchestrator | Tuesday 03 June 2025 15:35:59 +0000 (0:00:00.109) 0:02:45.990 ********** 2025-06-03 15:36:25.723324 | orchestrator | 2025-06-03 15:36:25.723332 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-03 15:36:25.723338 | orchestrator | Tuesday 03 June 2025 15:35:59 +0000 (0:00:00.063) 0:02:46.053 ********** 2025-06-03 15:36:25.723345 | orchestrator | 2025-06-03 15:36:25.723352 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-03 15:36:25.723359 | orchestrator | Tuesday 03 June 2025 15:35:59 +0000 (0:00:00.066) 0:02:46.120 ********** 2025-06-03 15:36:25.723366 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:36:25.723372 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:36:25.723379 | orchestrator | 2025-06-03 15:36:25.723386 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-03 15:36:25.723393 | orchestrator | Tuesday 03 June 2025 15:36:05 +0000 (0:00:06.331) 0:02:52.452 ********** 2025-06-03 15:36:25.723400 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:36:25.723407 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:36:25.723414 | orchestrator | 2025-06-03 15:36:25.723421 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-03 15:36:25.723428 | orchestrator | Tuesday 03 June 2025 15:36:12 +0000 (0:00:06.297) 0:02:58.750 ********** 2025-06-03 15:36:25.723435 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:36:25.723442 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:36:25.723449 | orchestrator | 2025-06-03 15:36:25.723456 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-03 15:36:25.723463 | orchestrator | Tuesday 03 June 2025 15:36:18 +0000 (0:00:06.241) 0:03:04.991 ********** 2025-06-03 15:36:25.723469 | 
orchestrator | skipping: [testbed-node-0] 2025-06-03 15:36:25.723475 | orchestrator | 2025-06-03 15:36:25.723482 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-03 15:36:25.723489 | orchestrator | Tuesday 03 June 2025 15:36:18 +0000 (0:00:00.117) 0:03:05.108 ********** 2025-06-03 15:36:25.723496 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.723502 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.723509 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.723516 | orchestrator | 2025-06-03 15:36:25.723523 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-03 15:36:25.723530 | orchestrator | Tuesday 03 June 2025 15:36:19 +0000 (0:00:00.952) 0:03:06.061 ********** 2025-06-03 15:36:25.723537 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.723544 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.723551 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:36:25.723558 | orchestrator | 2025-06-03 15:36:25.723565 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-03 15:36:25.723572 | orchestrator | Tuesday 03 June 2025 15:36:20 +0000 (0:00:00.726) 0:03:06.788 ********** 2025-06-03 15:36:25.723579 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.723586 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.723593 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.723600 | orchestrator | 2025-06-03 15:36:25.723606 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-03 15:36:25.723613 | orchestrator | Tuesday 03 June 2025 15:36:20 +0000 (0:00:00.760) 0:03:07.548 ********** 2025-06-03 15:36:25.723627 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:36:25.723634 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:36:25.723640 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:36:25.723647 | orchestrator | 2025-06-03 15:36:25.723653 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-03 15:36:25.723661 | orchestrator | Tuesday 03 June 2025 15:36:21 +0000 (0:00:00.716) 0:03:08.264 ********** 2025-06-03 15:36:25.723687 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.723694 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.723701 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.723708 | orchestrator | 2025-06-03 15:36:25.723715 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-03 15:36:25.723727 | orchestrator | Tuesday 03 June 2025 15:36:22 +0000 (0:00:01.183) 0:03:09.448 ********** 2025-06-03 15:36:25.723734 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:25.723740 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:25.723747 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:25.723753 | orchestrator | 2025-06-03 15:36:25.723759 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:36:25.723767 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-03 15:36:25.723775 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-03 15:36:25.723787 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-03 
15:36:25.723793 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:36:25.723800 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:36:25.723808 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:36:25.723815 | orchestrator | 2025-06-03 15:36:25.723822 | orchestrator | 2025-06-03 15:36:25.723829 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:36:25.723836 | orchestrator | Tuesday 03 June 2025 15:36:23 +0000 (0:00:01.074) 0:03:10.522 ********** 2025-06-03 15:36:25.723843 | orchestrator | =============================================================================== 2025-06-03 15:36:25.723849 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 66.49s 2025-06-03 15:36:25.723856 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.52s 2025-06-03 15:36:25.723863 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.28s 2025-06-03 15:36:25.723870 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.16s 2025-06-03 15:36:25.723876 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.22s 2025-06-03 15:36:25.723883 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.69s 2025-06-03 15:36:25.723890 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.74s 2025-06-03 15:36:25.723897 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.50s 2025-06-03 15:36:25.723904 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 3.35s 2025-06-03 15:36:25.723911 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.62s 2025-06-03 15:36:25.723918 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.42s 2025-06-03 15:36:25.723925 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.17s 2025-06-03 15:36:25.723933 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.93s 2025-06-03 15:36:25.723946 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.92s 2025-06-03 15:36:25.723961 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.84s 2025-06-03 15:36:25.723971 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.82s 2025-06-03 15:36:25.723978 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.71s 2025-06-03 15:36:25.723985 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.58s 2025-06-03 15:36:25.723992 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.55s 2025-06-03 15:36:25.723999 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s 2025-06-03 15:36:25.724010 | orchestrator | 2025-06-03 15:36:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:28.766393 | orchestrator | 2025-06-03 15:36:28 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in 
state STARTED 2025-06-03 15:36:28.767964 | orchestrator | 2025-06-03 15:36:28 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:36:28.768023 | orchestrator | 2025-06-03 15:36:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:21.445883 | orchestrator | 2025-06-03 15:38:21 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state
STARTED 2025-06-03 15:38:21.447912 | orchestrator | 2025-06-03 15:38:21 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:38:21.447991 | orchestrator | 2025-06-03 15:38:21 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:24.481957 | orchestrator | 2025-06-03 15:38:24 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:38:24.483505 | orchestrator | 2025-06-03 15:38:24 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:38:24.483604 | orchestrator | 2025-06-03 15:38:24 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:27.530938 | orchestrator | 2025-06-03 15:38:27 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:38:27.531195 | orchestrator | 2025-06-03 15:38:27 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:38:27.531499 | orchestrator | 2025-06-03 15:38:27 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:30.574070 | orchestrator | 2025-06-03 15:38:30 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:38:30.574200 | orchestrator | 2025-06-03 15:38:30 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state STARTED 2025-06-03 15:38:30.574223 | orchestrator | 2025-06-03 15:38:30 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:33.630221 | orchestrator | 2025-06-03 15:38:33 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:38:33.631972 | orchestrator | 2025-06-03 15:38:33 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:38:33.644928 | orchestrator | 2025-06-03 15:38:33 | INFO  | Task 51a1345b-5ce2-4ef9-92d9-ffa6a6ab454c is in state SUCCESS 2025-06-03 15:38:33.647252 | orchestrator | 2025-06-03 15:38:33.647305 | orchestrator | 2025-06-03 15:38:33.647319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:38:33.647365 | orchestrator | 2025-06-03 15:38:33.647380 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:38:33.647393 | orchestrator | Tuesday 03 June 2025 15:32:02 +0000 (0:00:00.653) 0:00:00.653 ********** 2025-06-03 15:38:33.647406 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.647454 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.647467 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.647478 | orchestrator | 2025-06-03 15:38:33.647489 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:38:33.647501 | orchestrator | Tuesday 03 June 2025 15:32:03 +0000 (0:00:00.732) 0:00:01.385 ********** 2025-06-03 15:38:33.647558 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-06-03 15:38:33.647571 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-06-03 15:38:33.647582 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-06-03 15:38:33.647593 | orchestrator | 2025-06-03 15:38:33.647604 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-06-03 15:38:33.647615 | orchestrator | 2025-06-03 15:38:33.647626 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-03 15:38:33.647661 | orchestrator | Tuesday 03 June 2025 15:32:05 +0000 (0:00:01.398) 0:00:02.783 ********** 
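The "Group hosts based on ..." tasks in the play above use Ansible's group_by module to sort hosts into dynamic groups such as enable_loadbalancer_True, which the following "Apply role loadbalancer" play then targets. A minimal sketch of that pattern, assuming the enable_loadbalancer variable implied by the item label in the log — illustrative only, not the exact upstream kolla-ansible playbook:

- name: Group hosts based on configuration
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "{{ item }}"
      loop:
        - "enable_loadbalancer_{{ enable_loadbalancer | bool }}"  # renders enable_loadbalancer_True on the controllers

- name: Apply role loadbalancer
  hosts: enable_loadbalancer_True
  roles:
    - role: loadbalancer

Grouping this way lets a single site playbook skip whole plays on hosts where a service is disabled, instead of guarding every task with a when condition; here the three controller nodes land in enable_loadbalancer_True and pick up deploy.yml from the loadbalancer role.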
2025-06-03 15:38:33.647673 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.647684 | orchestrator | 2025-06-03 15:38:33.647696 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-06-03 15:38:33.647707 | orchestrator | Tuesday 03 June 2025 15:32:06 +0000 (0:00:01.774) 0:00:04.558 ********** 2025-06-03 15:38:33.647718 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.647729 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.647740 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.647751 | orchestrator | 2025-06-03 15:38:33.647762 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-03 15:38:33.647773 | orchestrator | Tuesday 03 June 2025 15:32:08 +0000 (0:00:01.489) 0:00:06.048 ********** 2025-06-03 15:38:33.647784 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.647795 | orchestrator | 2025-06-03 15:38:33.647806 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-06-03 15:38:33.647843 | orchestrator | Tuesday 03 June 2025 15:32:11 +0000 (0:00:02.839) 0:00:08.887 ********** 2025-06-03 15:38:33.647856 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.647868 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.647880 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.647892 | orchestrator | 2025-06-03 15:38:33.647905 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-06-03 15:38:33.647918 | orchestrator | Tuesday 03 June 2025 15:32:12 +0000 (0:00:00.826) 0:00:09.714 ********** 2025-06-03 15:38:33.647930 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-03 15:38:33.647943 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-03 15:38:33.647955 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-03 15:38:33.647967 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-03 15:38:33.647979 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-03 15:38:33.647992 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-03 15:38:33.648005 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-03 15:38:33.648033 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-03 15:38:33.648098 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-03 15:38:33.648112 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-03 15:38:33.648125 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-03 15:38:33.648139 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-03 15:38:33.648151 | orchestrator | 2025-06-03 15:38:33.648207 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-03 
15:38:33.648221 | orchestrator | Tuesday 03 June 2025 15:32:15 +0000 (0:00:03.950) 0:00:13.664 ********** 2025-06-03 15:38:33.648234 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-03 15:38:33.648246 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-03 15:38:33.648257 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-03 15:38:33.648268 | orchestrator | 2025-06-03 15:38:33.648279 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-03 15:38:33.648289 | orchestrator | Tuesday 03 June 2025 15:32:16 +0000 (0:00:00.814) 0:00:14.479 ********** 2025-06-03 15:38:33.648300 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-03 15:38:33.648311 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-03 15:38:33.648322 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-03 15:38:33.648333 | orchestrator | 2025-06-03 15:38:33.648344 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-03 15:38:33.648354 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:02.124) 0:00:16.603 ********** 2025-06-03 15:38:33.648366 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-06-03 15:38:33.648377 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.648402 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-06-03 15:38:33.648414 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.648425 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-06-03 15:38:33.648435 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.648446 | orchestrator | 2025-06-03 15:38:33.648457 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-06-03 15:38:33.648468 | orchestrator | Tuesday 03 June 2025 15:32:20 +0000 (0:00:01.652) 0:00:18.255 ********** 2025-06-03 15:38:33.648505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.648525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.648537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.648555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.648568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.648587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.648599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.648619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.648630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.648737 | orchestrator | 2025-06-03 15:38:33.648790 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-06-03 15:38:33.648802 | orchestrator | Tuesday 03 June 2025 15:32:23 +0000 (0:00:02.995) 0:00:21.251 ********** 2025-06-03 15:38:33.648813 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.648824 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.648835 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.648845 | orchestrator | 2025-06-03 15:38:33.648856 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-06-03 15:38:33.648867 | orchestrator | Tuesday 03 June 2025 15:32:25 +0000 (0:00:01.495) 0:00:22.746 ********** 2025-06-03 15:38:33.648878 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-06-03 15:38:33.648889 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-06-03 15:38:33.648899 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-06-03 15:38:33.648910 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-06-03 15:38:33.648921 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-06-03 15:38:33.648932 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-06-03 15:38:33.648942 | orchestrator | 2025-06-03 15:38:33.648953 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-06-03 15:38:33.648964 | orchestrator | Tuesday 03 June 2025 15:32:28 +0000 (0:00:03.309) 0:00:26.056 ********** 2025-06-03 15:38:33.648975 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.648986 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.648996 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.649007 | orchestrator | 2025-06-03 15:38:33.649024 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-06-03 15:38:33.649035 | orchestrator | Tuesday 03 June 2025 15:32:29 +0000 (0:00:01.387) 0:00:27.444 ********** 2025-06-03 15:38:33.649046 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.649057 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.649068 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.649078 | orchestrator | 2025-06-03 15:38:33.649089 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-06-03 15:38:33.649100 | orchestrator | Tuesday 03 June 2025 15:32:31 +0000 (0:00:02.218) 0:00:29.662 ********** 2025-06-03 15:38:33.649111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.649146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.649159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.649171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:33.649210 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.649223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.649240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.649252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.649328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:33.649343 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.649355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.649366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.649378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-06-03 15:38:33.649389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:33.649401 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.649412 | orchestrator | 2025-06-03 15:38:33.649423 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-03 15:38:33.649487 | orchestrator | Tuesday 03 June 2025 15:32:33 +0000 (0:00:01.457) 0:00:31.120 ********** 2025-06-03 15:38:33.649510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.649528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.649541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.649553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.649564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.649576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:33.649588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.649607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.649723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:33.649746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.649758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.649769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90', '__omit_place_holder__095b187dd7b1261b0be0aded961e014ef90a6e90'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:33.649883 | orchestrator | 2025-06-03 15:38:33.649897 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-03 15:38:33.649908 | orchestrator | Tuesday 03 June 2025 15:32:36 +0000 (0:00:03.136) 0:00:34.257 ********** 2025-06-03 15:38:33.649934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.649947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.649969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.649981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.649993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.650004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.650094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.650115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.650127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.650138 | orchestrator | 2025-06-03 15:38:33.650149 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-03 15:38:33.650160 | orchestrator | Tuesday 03 June 2025 15:32:40 +0000 (0:00:04.430) 0:00:38.687 ********** 2025-06-03 15:38:33.650172 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-03 15:38:33.650824 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-03 15:38:33.650886 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-03 15:38:33.650894 | orchestrator | 2025-06-03 15:38:33.650900 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-03 15:38:33.650906 | orchestrator | Tuesday 03 June 2025 15:32:42 +0000 (0:00:01.961) 0:00:40.649 ********** 2025-06-03 15:38:33.650910 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-03 15:38:33.650915 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-03 15:38:33.650920 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-03 15:38:33.650925 | orchestrator | 2025-06-03 15:38:33.650929 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-06-03 15:38:33.650934 | orchestrator | Tuesday 03 June 2025 15:32:46 +0000 (0:00:03.571) 0:00:44.221 ********** 2025-06-03 15:38:33.650939 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.650944 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.650948 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.650953 | orchestrator | 2025-06-03 15:38:33.650958 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-03 15:38:33.650962 | orchestrator | Tuesday 03 June 2025 15:32:47 +0000 (0:00:00.579) 0:00:44.800 ********** 2025-06-03 15:38:33.650967 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-03 15:38:33.650973 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-03 15:38:33.650978 | 
orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-03 15:38:33.650997 | orchestrator | 2025-06-03 15:38:33.651002 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-03 15:38:33.651007 | orchestrator | Tuesday 03 June 2025 15:32:50 +0000 (0:00:02.937) 0:00:47.737 ********** 2025-06-03 15:38:33.651011 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-03 15:38:33.651016 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-03 15:38:33.651021 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-03 15:38:33.651026 | orchestrator | 2025-06-03 15:38:33.651030 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-03 15:38:33.651035 | orchestrator | Tuesday 03 June 2025 15:32:52 +0000 (0:00:02.546) 0:00:50.283 ********** 2025-06-03 15:38:33.651040 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-03 15:38:33.651045 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-03 15:38:33.651049 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-03 15:38:33.651054 | orchestrator | 2025-06-03 15:38:33.651058 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-03 15:38:33.651063 | orchestrator | Tuesday 03 June 2025 15:32:54 +0000 (0:00:01.513) 0:00:51.797 ********** 2025-06-03 15:38:33.651067 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-03 15:38:33.651072 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-03 15:38:33.651077 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-03 15:38:33.651081 | orchestrator | 2025-06-03 15:38:33.651092 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-03 15:38:33.651097 | orchestrator | Tuesday 03 June 2025 15:32:55 +0000 (0:00:01.700) 0:00:53.497 ********** 2025-06-03 15:38:33.651101 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.651106 | orchestrator | 2025-06-03 15:38:33.651110 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-03 15:38:33.651115 | orchestrator | Tuesday 03 June 2025 15:32:56 +0000 (0:00:00.609) 0:00:54.107 ********** 2025-06-03 15:38:33.651121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.651141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.651146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.651155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.651161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.651169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.651174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.651180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.651189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.651197 | orchestrator | 2025-06-03 15:38:33.651202 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-03 15:38:33.651207 | orchestrator | Tuesday 03 June 2025 15:32:59 +0000 (0:00:03.407) 0:00:57.514 ********** 2025-06-03 15:38:33.651212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651227 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.651236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651272 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.651281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
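The loop items dumped above are the loadbalancer service definitions that the role iterates over: a mapping keyed by service name (haproxy, proxysql, keepalived, haproxy-ssh), each value carrying the container name, image, volumes, and an optional healthcheck. Tasks such as "Removing checks for services which are disabled" and "Copying checks for services which are enabled" walk this mapping and report "changed" only for entries that are enabled and define a healthcheck, which is why keepalived (no healthcheck) and haproxy-ssh (enabled: False) show up as "skipping". A minimal Python sketch of that selection, assuming only the structure visible in the log output (the filter condition is an illustration, not the role's actual Jinja expression):

    # Service definitions as they appear in the loop items above (abridged).
    services = {
        "haproxy": {
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/haproxy:2.6.12.20250530",
            "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]},
        },
        "proxysql": {
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/proxysql:2.7.3.20250530",
            "healthcheck": {"test": ["CMD-SHELL", "healthcheck_listen proxysql 6032"]},
        },
        "keepalived": {
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/keepalived:2.2.7.20250530",
            # no healthcheck key -> no check script to copy
        },
        "haproxy-ssh": {
            "enabled": False,  # disabled in this deployment
            "image": "registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530",
            "healthcheck": {"test": ["CMD-SHELL", "healthcheck_listen sshd 2985"]},
        },
    }

    # Copy a check only for services that are enabled and define a healthcheck;
    # everything else is skipped, matching the changed/skipping pattern above.
    for name, svc in services.items():
        if svc.get("enabled") and "healthcheck" in svc:
            print(f"copy check for {name} ({svc['image']})")
        else:
            print(f"skip {name}")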
2025-06-03 15:38:33.651297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651305 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.651312 | orchestrator | 2025-06-03 15:38:33.651320 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-03 15:38:33.651326 | orchestrator | Tuesday 03 June 2025 15:33:00 +0000 (0:00:00.548) 0:00:58.063 ********** 2025-06-03 15:38:33.651334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651357 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.651362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651377 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.651382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651405 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.651410 | orchestrator | 2025-06-03 15:38:33.651415 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-03 
15:38:33.651420 | orchestrator | Tuesday 03 June 2025 15:33:01 +0000 (0:00:01.189) 0:00:59.252 ********** 2025-06-03 15:38:33.651429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651446 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.651451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651465 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651473 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.651482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651498 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.651504 | orchestrator | 2025-06-03 15:38:33.651509 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-03 15:38:33.651514 | orchestrator | Tuesday 03 June 2025 15:33:02 +0000 (0:00:01.329) 0:01:00.581 ********** 2025-06-03 15:38:33.651519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 
15:38:33.651530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651545 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.651551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651575 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.651580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651601 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.651606 | orchestrator | 2025-06-03 15:38:33.651611 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-03 15:38:33.651615 | orchestrator | Tuesday 03 June 2025 15:33:03 +0000 (0:00:00.759) 0:01:01.342 ********** 2025-06-03 15:38:33.651620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651671 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.651676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651698 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.651703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651720 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.651725 | orchestrator | 2025-06-03 15:38:33.651730 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-03 15:38:33.651734 | orchestrator | Tuesday 03 June 2025 15:33:04 +0000 (0:00:01.186) 0:01:02.528 ********** 2025-06-03 15:38:33.651739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651757 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.651765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651784 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.651789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651807 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.651812 | orchestrator | 2025-06-03 15:38:33.651817 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-03 15:38:33.651821 | orchestrator | Tuesday 03 June 2025 15:33:05 +0000 (0:00:00.618) 0:01:03.146 ********** 2025-06-03 15:38:33.651829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651854 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.651862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651891 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.651896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651915 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.651919 | orchestrator | 2025-06-03 15:38:33.651924 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-03 15:38:33.651932 | orchestrator | Tuesday 03 June 2025 15:33:06 +0000 (0:00:00.570) 0:01:03.717 ********** 2025-06-03 15:38:33.651937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651956 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.651961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.651978 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.651986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:33.651991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:33.651995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:33.652004 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.652009 | orchestrator | 2025-06-03 15:38:33.652013 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-03 15:38:33.652018 | orchestrator | Tuesday 03 June 2025 15:33:07 +0000 (0:00:01.115) 0:01:04.832 ********** 2025-06-03 15:38:33.652023 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-03 15:38:33.652028 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-03 15:38:33.652032 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-03 15:38:33.652037 | orchestrator | 2025-06-03 15:38:33.652041 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-03 15:38:33.652046 | orchestrator | Tuesday 03 June 2025 15:33:08 +0000 (0:00:01.415) 0:01:06.248 ********** 2025-06-03 15:38:33.652051 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-03 15:38:33.652055 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-03 15:38:33.652060 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-03 15:38:33.652065 | orchestrator | 2025-06-03 15:38:33.652070 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-03 15:38:33.652074 | orchestrator | Tuesday 03 June 2025 15:33:09 +0000 (0:00:01.373) 0:01:07.621 ********** 2025-06-03 15:38:33.652083 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-03 15:38:33.652088 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-03 15:38:33.652093 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-03 15:38:33.652101 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-03 15:38:33.652108 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.652116 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-03 15:38:33.652123 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.652132 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-03 15:38:33.652144 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.652152 | orchestrator | 2025-06-03 15:38:33.652158 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-03 15:38:33.652165 | orchestrator | Tuesday 03 June 2025 15:33:10 +0000 (0:00:01.032) 0:01:08.654 ********** 2025-06-03 15:38:33.652178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.652186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.652200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:33.652207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.652214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.652227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:33.652234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.652247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.652258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:33.652264 | orchestrator | 2025-06-03 15:38:33.652272 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-03 15:38:33.652279 | orchestrator | Tuesday 03 June 2025 15:33:13 +0000 (0:00:02.827) 0:01:11.481 ********** 2025-06-03 15:38:33.652286 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.652293 | orchestrator | 2025-06-03 15:38:33.652301 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-03 15:38:33.652310 | orchestrator | Tuesday 03 June 2025 15:33:14 +0000 (0:00:00.738) 0:01:12.220 ********** 2025-06-03 15:38:33.652320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-03 15:38:33.652329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.652339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-03 15:38:33.652513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.652518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-03 15:38:33.652594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.652609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652620 | orchestrator | 2025-06-03 15:38:33.652624 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-03 15:38:33.652630 | orchestrator | Tuesday 03 June 2025 15:33:19 +0000 (0:00:04.636) 0:01:16.856 ********** 2025-06-03 15:38:33.652664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': 
'8042'}}}})  2025-06-03 15:38:33.652670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.652679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652693 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.652702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-03 15:38:33.652707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.652712 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652723 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.652731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-03 15:38:33.652740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.652747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 
'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.652757 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.652762 | orchestrator | 2025-06-03 15:38:33.652767 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-03 15:38:33.652771 | orchestrator | Tuesday 03 June 2025 15:33:20 +0000 (0:00:01.697) 0:01:18.554 ********** 2025-06-03 15:38:33.652777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:33.652783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:33.652788 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.652793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:33.652798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:33.652805 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.652813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:33.652824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:33.652834 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.652843 | orchestrator | 2025-06-03 15:38:33.652850 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-03 15:38:33.652864 | orchestrator | Tuesday 03 June 2025 15:33:23 +0000 (0:00:02.230) 0:01:20.784 ********** 2025-06-03 15:38:33.652872 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.652880 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.652892 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.652899 | orchestrator | 2025-06-03 15:38:33.652906 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-03 15:38:33.652914 | orchestrator | Tuesday 03 June 2025 15:33:24 +0000 (0:00:01.441) 0:01:22.225 ********** 2025-06-03 15:38:33.652922 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.652930 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.652938 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.652946 | orchestrator | 2025-06-03 15:38:33.652955 | orchestrator | TASK [include_role : barbican] 
************************************************* 2025-06-03 15:38:33.652960 | orchestrator | Tuesday 03 June 2025 15:33:26 +0000 (0:00:02.106) 0:01:24.332 ********** 2025-06-03 15:38:33.652964 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.652969 | orchestrator | 2025-06-03 15:38:33.652974 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-03 15:38:33.652978 | orchestrator | Tuesday 03 June 2025 15:33:27 +0000 (0:00:00.745) 0:01:25.077 ********** 2025-06-03 15:38:33.652989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.652995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.653017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.653035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653049 | orchestrator | 2025-06-03 15:38:33.653054 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-03 15:38:33.653059 | orchestrator | Tuesday 03 June 2025 15:33:32 +0000 (0:00:05.387) 0:01:30.464 ********** 2025-06-03 15:38:33.653067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.653073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.653080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653107 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.653114 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.653131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.653146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653162 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.653169 | orchestrator | 2025-06-03 15:38:33.653176 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-03 15:38:33.653184 | orchestrator | Tuesday 03 June 2025 15:33:33 +0000 (0:00:00.684) 0:01:31.149 ********** 2025-06-03 15:38:33.653192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:33.653201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:33.653215 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.653223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:33.653231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:33.653236 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.653242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:33.653247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:33.653253 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.653258 | orchestrator | 2025-06-03 15:38:33.653263 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-03 15:38:33.653269 | orchestrator | Tuesday 03 June 2025 15:33:34 +0000 (0:00:00.747) 0:01:31.896 ********** 2025-06-03 15:38:33.653274 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.653280 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.653286 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.653292 | orchestrator | 2025-06-03 15:38:33.653300 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-03 15:38:33.653307 | orchestrator | Tuesday 03 
June 2025 15:33:35 +0000 (0:00:01.757) 0:01:33.653 ********** 2025-06-03 15:38:33.653314 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.653324 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.653332 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.653341 | orchestrator | 2025-06-03 15:38:33.653349 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-03 15:38:33.653357 | orchestrator | Tuesday 03 June 2025 15:33:38 +0000 (0:00:02.230) 0:01:35.884 ********** 2025-06-03 15:38:33.653364 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.653368 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.653373 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.653378 | orchestrator | 2025-06-03 15:38:33.653382 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-03 15:38:33.653387 | orchestrator | Tuesday 03 June 2025 15:33:38 +0000 (0:00:00.319) 0:01:36.203 ********** 2025-06-03 15:38:33.653391 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.653396 | orchestrator | 2025-06-03 15:38:33.653401 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-03 15:38:33.653405 | orchestrator | Tuesday 03 June 2025 15:33:39 +0000 (0:00:00.819) 0:01:37.022 ********** 2025-06-03 15:38:33.653417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-03 15:38:33.653422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-03 15:38:33.653432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-03 15:38:33.653437 | orchestrator | 2025-06-03 15:38:33.653442 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-03 15:38:33.653446 | orchestrator | Tuesday 03 June 2025 15:33:43 +0000 (0:00:03.786) 0:01:40.808 ********** 2025-06-03 15:38:33.653455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-03 15:38:33.653460 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.653466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-03 15:38:33.653471 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.653479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 
5']}}}})  2025-06-03 15:38:33.653488 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.653493 | orchestrator | 2025-06-03 15:38:33.653497 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-03 15:38:33.653502 | orchestrator | Tuesday 03 June 2025 15:33:45 +0000 (0:00:02.103) 0:01:42.912 ********** 2025-06-03 15:38:33.653508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:33.653514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:33.653520 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.653526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:33.653530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:33.653535 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.653543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:33.653548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:33.653553 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.653557 | orchestrator | 2025-06-03 15:38:33.653562 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-03 15:38:33.653567 | orchestrator | Tuesday 03 June 2025 15:33:47 +0000 
(0:00:02.155) 0:01:45.067 ********** 2025-06-03 15:38:33.653571 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.653579 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.653583 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.653588 | orchestrator | 2025-06-03 15:38:33.653593 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-03 15:38:33.653597 | orchestrator | Tuesday 03 June 2025 15:33:48 +0000 (0:00:00.816) 0:01:45.884 ********** 2025-06-03 15:38:33.653602 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.653607 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.653612 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.653617 | orchestrator | 2025-06-03 15:38:33.653621 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-03 15:38:33.653629 | orchestrator | Tuesday 03 June 2025 15:33:49 +0000 (0:00:01.131) 0:01:47.015 ********** 2025-06-03 15:38:33.653735 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.653751 | orchestrator | 2025-06-03 15:38:33.653756 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-03 15:38:33.653761 | orchestrator | Tuesday 03 June 2025 15:33:50 +0000 (0:00:01.133) 0:01:48.149 ********** 2025-06-03 15:38:33.653766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.653772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.653813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.653840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653870 | orchestrator | 2025-06-03 15:38:33.653874 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-03 15:38:33.653880 | orchestrator | Tuesday 03 June 2025 15:33:56 +0000 (0:00:06.301) 0:01:54.450 ********** 2025-06-03 15:38:33.653891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.653899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653921 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.653926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.653931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653956 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.653964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.653970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.653991 | orchestrator | 
skipping: [testbed-node-2] 2025-06-03 15:38:33.653996 | orchestrator | 2025-06-03 15:38:33.654001 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-03 15:38:33.654005 | orchestrator | Tuesday 03 June 2025 15:33:58 +0000 (0:00:01.517) 0:01:55.967 ********** 2025-06-03 15:38:33.654010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-03 15:38:33.654039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-03 15:38:33.654047 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.654052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-03 15:38:33.654056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-03 15:38:33.654061 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.654070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-03 15:38:33.654076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-03 15:38:33.654081 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.654086 | orchestrator | 2025-06-03 15:38:33.654090 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-03 15:38:33.654095 | orchestrator | Tuesday 03 June 2025 15:33:59 +0000 (0:00:01.102) 0:01:57.070 ********** 2025-06-03 15:38:33.654100 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.654104 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.654109 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.654114 | orchestrator | 2025-06-03 15:38:33.654118 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-03 15:38:33.654123 | orchestrator | Tuesday 03 June 2025 15:34:00 +0000 (0:00:01.327) 0:01:58.398 ********** 2025-06-03 15:38:33.654128 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.654132 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.654137 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.654142 | orchestrator | 2025-06-03 15:38:33.654146 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-03 15:38:33.654151 | orchestrator | Tuesday 03 June 2025 15:34:02 +0000 (0:00:02.299) 0:02:00.697 ********** 2025-06-03 15:38:33.654155 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.654160 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.654165 | orchestrator | skipping: [testbed-node-2] 2025-06-03 
15:38:33.654169 | orchestrator | 2025-06-03 15:38:33.654174 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-03 15:38:33.654179 | orchestrator | Tuesday 03 June 2025 15:34:03 +0000 (0:00:00.708) 0:02:01.406 ********** 2025-06-03 15:38:33.654184 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.654193 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.654198 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.654203 | orchestrator | 2025-06-03 15:38:33.654208 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-03 15:38:33.654213 | orchestrator | Tuesday 03 June 2025 15:34:04 +0000 (0:00:00.440) 0:02:01.846 ********** 2025-06-03 15:38:33.654217 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.654222 | orchestrator | 2025-06-03 15:38:33.654226 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-03 15:38:33.654231 | orchestrator | Tuesday 03 June 2025 15:34:04 +0000 (0:00:00.788) 0:02:02.635 ********** 2025-06-03 15:38:33.654237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:38:33.654246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:38:33.654251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:38:33.654280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:38:33.654302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:38:33.654349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:38:33.654354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654384 | orchestrator | 2025-06-03 15:38:33.654390 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-03 15:38:33.654394 | orchestrator | Tuesday 03 June 2025 15:34:09 +0000 (0:00:04.621) 0:02:07.256 ********** 2025-06-03 15:38:33.654403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:38:33.654408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:38:33.654419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:38:33.654459 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.654464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:38:33.654469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:38:33.654476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:38:33.654489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654558 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.654563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.654568 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.654573 | orchestrator | 2025-06-03 15:38:33.654578 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-03 15:38:33.654582 | orchestrator | Tuesday 03 June 2025 15:34:10 +0000 (0:00:00.922) 0:02:08.179 ********** 2025-06-03 15:38:33.654587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:33.654592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:33.654597 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.654602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:33.654607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:33.654611 | 
orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.654616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:33.654621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:33.654625 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.654630 | orchestrator | 2025-06-03 15:38:33.654660 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-03 15:38:33.654666 | orchestrator | Tuesday 03 June 2025 15:34:11 +0000 (0:00:01.063) 0:02:09.243 ********** 2025-06-03 15:38:33.654670 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.654675 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.654680 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.654684 | orchestrator | 2025-06-03 15:38:33.654692 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-03 15:38:33.654697 | orchestrator | Tuesday 03 June 2025 15:34:13 +0000 (0:00:01.732) 0:02:10.976 ********** 2025-06-03 15:38:33.654702 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.654706 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.654711 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.654716 | orchestrator | 2025-06-03 15:38:33.654720 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-03 15:38:33.654730 | orchestrator | Tuesday 03 June 2025 15:34:15 +0000 (0:00:01.970) 0:02:12.946 ********** 2025-06-03 15:38:33.654735 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.654739 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.654744 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.654749 | orchestrator | 2025-06-03 15:38:33.654753 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-03 15:38:33.654758 | orchestrator | Tuesday 03 June 2025 15:34:15 +0000 (0:00:00.295) 0:02:13.242 ********** 2025-06-03 15:38:33.654763 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.654768 | orchestrator | 2025-06-03 15:38:33.654772 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-03 15:38:33.654777 | orchestrator | Tuesday 03 June 2025 15:34:16 +0000 (0:00:00.812) 0:02:14.054 ********** 2025-06-03 15:38:33.654794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:38:33.654804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.654819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:38:33.654825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.654840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:38:33.654846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.654851 | orchestrator | 2025-06-03 15:38:33.654856 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-03 15:38:33.654861 | orchestrator | Tuesday 03 June 2025 15:34:21 +0000 (0:00:04.765) 0:02:18.820 ********** 2025-06-03 15:38:33.654876 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:38:33.654882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.654887 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.654895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:38:33.654908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.654914 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.654921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:38:33.654936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.654943 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.654947 | orchestrator | 2025-06-03 15:38:33.654952 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-03 15:38:33.654957 | orchestrator | Tuesday 03 June 2025 15:34:25 +0000 (0:00:04.311) 0:02:23.131 ********** 2025-06-03 15:38:33.654962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:33.654968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:33.654978 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.654985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:33.654991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:33.654996 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.655001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:33.655010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:33.655016 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.655020 | orchestrator | 2025-06-03 15:38:33.655025 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-03 15:38:33.655030 | orchestrator | Tuesday 03 June 2025 15:34:29 +0000 (0:00:03.983) 0:02:27.115 ********** 2025-06-03 15:38:33.655035 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.655040 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.655044 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.655049 | orchestrator | 2025-06-03 15:38:33.655054 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-03 15:38:33.655058 | orchestrator | Tuesday 03 June 2025 15:34:30 +0000 (0:00:01.470) 0:02:28.586 ********** 2025-06-03 15:38:33.655063 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.655068 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.655072 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.655077 | orchestrator | 2025-06-03 15:38:33.655082 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-03 15:38:33.655087 | orchestrator | Tuesday 03 June 2025 15:34:32 +0000 (0:00:01.917) 0:02:30.503 ********** 2025-06-03 15:38:33.655091 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.655096 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.655100 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.655109 | orchestrator | 2025-06-03 15:38:33.655114 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-03 15:38:33.655118 | orchestrator | Tuesday 03 June 2025 15:34:33 +0000 (0:00:00.292) 0:02:30.795 ********** 2025-06-03 15:38:33.655123 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.655128 | orchestrator | 2025-06-03 15:38:33.655132 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-03 15:38:33.655137 | orchestrator | Tuesday 03 June 2025 15:34:33 +0000 (0:00:00.866) 0:02:31.662 ********** 2025-06-03 15:38:33.655142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:38:33.655150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:38:33.655155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:38:33.655160 | orchestrator | 2025-06-03 15:38:33.655165 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-03 15:38:33.655169 | orchestrator | Tuesday 03 June 2025 15:34:37 +0000 (0:00:03.736) 0:02:35.398 ********** 2025-06-03 15:38:33.655178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-03 15:38:33.655183 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.655188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-03 15:38:33.655197 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.655202 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-03 15:38:33.655206 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.655211 | orchestrator | 2025-06-03 15:38:33.655216 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-03 15:38:33.655221 | orchestrator | Tuesday 03 June 2025 15:34:38 +0000 (0:00:00.442) 0:02:35.841 ********** 2025-06-03 15:38:33.655226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-03 15:38:33.655230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-03 15:38:33.655235 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.655242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-03 15:38:33.655247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-03 15:38:33.655252 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.655257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-03 15:38:33.655261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-03 15:38:33.655266 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.655271 | orchestrator | 2025-06-03 15:38:33.655275 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-03 15:38:33.655280 | orchestrator | Tuesday 03 June 2025 15:34:38 +0000 (0:00:00.725) 0:02:36.566 ********** 2025-06-03 15:38:33.655285 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.655290 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.655294 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.655299 | orchestrator | 2025-06-03 15:38:33.655303 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-03 15:38:33.655308 | orchestrator | Tuesday 03 June 2025 15:34:40 +0000 (0:00:01.777) 0:02:38.343 ********** 2025-06-03 15:38:33.655313 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.655318 | 
orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.655323 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.655327 | orchestrator | 2025-06-03 15:38:33.655336 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-03 15:38:33.655345 | orchestrator | Tuesday 03 June 2025 15:34:42 +0000 (0:00:02.167) 0:02:40.510 ********** 2025-06-03 15:38:33.655350 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.655355 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.655359 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.655364 | orchestrator | 2025-06-03 15:38:33.655368 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-03 15:38:33.655373 | orchestrator | Tuesday 03 June 2025 15:34:43 +0000 (0:00:00.332) 0:02:40.842 ********** 2025-06-03 15:38:33.655378 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.655382 | orchestrator | 2025-06-03 15:38:33.655387 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-03 15:38:33.655391 | orchestrator | Tuesday 03 June 2025 15:34:44 +0000 (0:00:00.988) 0:02:41.831 ********** 2025-06-03 15:38:33.655399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:38:33.655409 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:38:33.655422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:38:33.655428 | orchestrator | 2025-06-03 15:38:33.655433 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-03 15:38:33.655438 | orchestrator | Tuesday 03 June 2025 15:34:48 +0000 (0:00:04.429) 0:02:46.260 ********** 2025-06-03 15:38:33.655447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:38:33.655457 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.655465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:38:33.655473 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.655483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:38:33.655489 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.655494 | orchestrator | 2025-06-03 15:38:33.655498 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-03 15:38:33.655503 | orchestrator | Tuesday 03 June 2025 15:34:49 +0000 (0:00:00.677) 0:02:46.938 ********** 2025-06-03 15:38:33.655508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 15:38:33.655515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-03 15:38:33.655524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 15:38:33.655530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-03 15:38:33.655535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-03 15:38:33.655543 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.655548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 15:38:33.655552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2025-06-03 15:38:33.655560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 15:38:33.655566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-03 15:38:33.655570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-03 15:38:33.655575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 15:38:33.655580 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.655584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-03 15:38:33.655589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 15:38:33.655594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-03 15:38:33.655599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-03 15:38:33.655604 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.655609 | orchestrator | 2025-06-03 15:38:33.655613 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-06-03 15:38:33.655618 | orchestrator | Tuesday 03 June 2025 15:34:50 +0000 (0:00:01.141) 0:02:48.080 ********** 2025-06-03 15:38:33.655623 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.655627 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.655632 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.655661 | orchestrator | 2025-06-03 15:38:33.655666 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-06-03 15:38:33.655674 | orchestrator | Tuesday 03 June 
2025 15:34:51 +0000 (0:00:01.603) 0:02:49.683 ********** 2025-06-03 15:38:33.655678 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.655683 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.655688 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.655693 | orchestrator | 2025-06-03 15:38:33.655697 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-06-03 15:38:33.655702 | orchestrator | Tuesday 03 June 2025 15:34:54 +0000 (0:00:02.216) 0:02:51.900 ********** 2025-06-03 15:38:33.655707 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.655712 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.655716 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.655721 | orchestrator | 2025-06-03 15:38:33.655725 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-06-03 15:38:33.655730 | orchestrator | Tuesday 03 June 2025 15:34:54 +0000 (0:00:00.324) 0:02:52.225 ********** 2025-06-03 15:38:33.655735 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.655739 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.655744 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.655749 | orchestrator | 2025-06-03 15:38:33.655753 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-06-03 15:38:33.655758 | orchestrator | Tuesday 03 June 2025 15:34:54 +0000 (0:00:00.321) 0:02:52.547 ********** 2025-06-03 15:38:33.655763 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.655768 | orchestrator | 2025-06-03 15:38:33.655773 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-06-03 15:38:33.655777 | orchestrator | Tuesday 03 June 2025 15:34:56 +0000 (0:00:01.184) 0:02:53.731 ********** 2025-06-03 15:38:33.655786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:38:33.655793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:38:33.655802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:33.655810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:33.655815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:33.655823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:33.655829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:38:33.655834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:33.655845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:33.655850 | orchestrator | 2025-06-03 15:38:33.655854 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-03 15:38:33.655859 | orchestrator | Tuesday 03 June 2025 15:34:59 +0000 (0:00:03.296) 0:02:57.028 ********** 2025-06-03 15:38:33.655866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:38:33.655875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:33.655880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:33.655885 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.655890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:38:33.655899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:33.655906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:33.655911 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.656448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:38:33.656468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:33.656474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:33.656485 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.656490 | orchestrator | 2025-06-03 15:38:33.656495 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-03 15:38:33.656500 | orchestrator | Tuesday 03 June 2025 15:34:59 +0000 (0:00:00.575) 0:02:57.604 ********** 2025-06-03 15:38:33.656505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:33.656510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:33.656515 | 
orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.656520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:33.656528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:33.656534 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.656556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:33.656562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:33.656567 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.656571 | orchestrator | 2025-06-03 15:38:33.656576 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-03 15:38:33.656580 | orchestrator | Tuesday 03 June 2025 15:35:01 +0000 (0:00:01.144) 0:02:58.748 ********** 2025-06-03 15:38:33.656585 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.656589 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.656594 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.656598 | orchestrator | 2025-06-03 15:38:33.656603 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-06-03 15:38:33.656608 | orchestrator | Tuesday 03 June 2025 15:35:02 +0000 (0:00:01.263) 0:03:00.011 ********** 2025-06-03 15:38:33.656612 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.656617 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.656621 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.656626 | orchestrator | 2025-06-03 15:38:33.656630 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-03 15:38:33.656659 | orchestrator | Tuesday 03 June 2025 15:35:04 +0000 (0:00:02.056) 0:03:02.068 ********** 2025-06-03 15:38:33.656664 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.656669 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.656674 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.656678 | orchestrator | 2025-06-03 15:38:33.656683 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-03 15:38:33.656688 | orchestrator | Tuesday 03 June 2025 15:35:04 +0000 (0:00:00.317) 0:03:02.385 ********** 2025-06-03 15:38:33.656696 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.656701 | orchestrator | 2025-06-03 15:38:33.656705 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-03 15:38:33.656710 | orchestrator | Tuesday 03 June 2025 15:35:05 +0000 (0:00:01.259) 0:03:03.644 
********** 2025-06-03 15:38:33.656715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:38:33.656721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.656728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:38:33.656734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.656742 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:38:33.656750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.656755 | orchestrator | 2025-06-03 15:38:33.656760 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-03 15:38:33.656765 | orchestrator | Tuesday 03 June 2025 15:35:09 +0000 (0:00:03.537) 0:03:07.182 ********** 2025-06-03 15:38:33.656770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:38:33.656777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.656782 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.656789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:38:33.656797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.656802 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.656807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:38:33.656812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.656816 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.656821 | orchestrator | 2025-06-03 15:38:33.656826 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-03 15:38:33.656832 | orchestrator | Tuesday 03 June 2025 15:35:10 +0000 (0:00:00.696) 0:03:07.878 ********** 2025-06-03 15:38:33.656837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:33.656843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:33.656848 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.656852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:33.656857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:33.656864 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.656869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:33.656873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:33.656881 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.656886 | orchestrator | 2025-06-03 15:38:33.656890 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-03 15:38:33.656895 | orchestrator | Tuesday 03 June 2025 15:35:11 +0000 (0:00:01.403) 0:03:09.282 ********** 2025-06-03 15:38:33.656899 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.656904 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.656908 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.656913 | orchestrator | 2025-06-03 15:38:33.656917 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-03 15:38:33.656922 | orchestrator | Tuesday 03 June 2025 15:35:12 +0000 (0:00:01.272) 0:03:10.555 ********** 2025-06-03 15:38:33.656927 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.656931 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.656936 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.656940 | orchestrator | 2025-06-03 15:38:33.656945 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-03 15:38:33.656949 | orchestrator | Tuesday 03 June 2025 15:35:15 +0000 (0:00:02.184) 0:03:12.739 ********** 2025-06-03 15:38:33.656954 | orchestrator | included: manila for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-06-03 15:38:33.656958 | orchestrator | 2025-06-03 15:38:33.656963 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-03 15:38:33.656968 | orchestrator | Tuesday 03 June 2025 15:35:15 +0000 (0:00:00.951) 0:03:13.691 ********** 2025-06-03 15:38:33.656973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-03 15:38:33.656978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.656985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.656993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-03 15:38:33.657007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-03 15:38:33.657029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 
'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657048 | orchestrator | 2025-06-03 15:38:33.657052 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-03 15:38:33.657067 | orchestrator | Tuesday 03 June 2025 15:35:19 +0000 (0:00:03.357) 0:03:17.048 ********** 2025-06-03 15:38:33.657072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-03 15:38:33.657077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657098 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-03 15:38:33.657111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657128 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-03 15:38:33.657143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.657158 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.657162 | orchestrator | 2025-06-03 15:38:33.657167 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-03 15:38:33.657172 | orchestrator | Tuesday 03 June 2025 15:35:20 +0000 (0:00:00.729) 0:03:17.778 ********** 2025-06-03 15:38:33.657176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:33.657181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:33.657188 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:33.657197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:33.657202 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:33.657213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:33.657218 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.657223 | orchestrator | 2025-06-03 15:38:33.657227 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-03 15:38:33.657232 | orchestrator | Tuesday 03 June 2025 15:35:21 +0000 (0:00:01.037) 0:03:18.815 ********** 2025-06-03 15:38:33.657236 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.657241 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.657245 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.657250 | orchestrator | 2025-06-03 15:38:33.657255 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-03 15:38:33.657259 | orchestrator | Tuesday 03 June 2025 15:35:22 +0000 (0:00:01.653) 0:03:20.469 ********** 2025-06-03 15:38:33.657264 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.657268 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.657273 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.657277 | orchestrator | 2025-06-03 15:38:33.657282 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-03 15:38:33.657286 | orchestrator | Tuesday 03 June 2025 15:35:24 +0000 (0:00:01.881) 0:03:22.351 ********** 
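The magnum and manila blocks above follow the same haproxy-config pattern: each enabled API service carries a 'haproxy' dict with an internal entry ('external': False) and an external entry ('external': True, 'external_fqdn': 'api.testbed.osism.xyz'), both bound to the service port (9511 for magnum-api, 8786 for manila-api), while the single-external-frontend and firewall tasks are skipped. A minimal Python sketch of how such a dict could be rendered into listen stanzas is given below; render_listen and the VIP addresses are illustrative assumptions, not the actual kolla-ansible template.

# Illustrative sketch only: rendering a kolla-style 'haproxy' service dict
# (as dumped in the log above) into listen stanzas. The helper name and the
# VIP addresses are assumptions, not kolla-ansible internals.
def render_listen(name, svc, internal_vip, external_vip):
    # Each entry picks the internal or external VIP via its 'external' flag.
    vip = external_vip if svc.get("external") else internal_vip
    return "\n".join([
        f"listen {name}",
        f"    mode {svc['mode']}",
        f"    bind {vip}:{svc['listen_port']}",
    ])

manila_haproxy = {
    "manila_api": {"enabled": "yes", "mode": "http", "external": False,
                   "port": "8786", "listen_port": "8786"},
    "manila_api_external": {"enabled": "yes", "mode": "http", "external": True,
                            "external_fqdn": "api.testbed.osism.xyz",
                            "port": "8786", "listen_port": "8786"},
}

# Placeholder VIPs (documentation addresses), purely for illustration.
for name, svc in manila_haproxy.items():
    if svc["enabled"] == "yes":
        print(render_listen(name, svc, "192.0.2.10", "203.0.113.10"))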
2025-06-03 15:38:33.657291 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.657295 | orchestrator | 2025-06-03 15:38:33.657300 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-03 15:38:33.657305 | orchestrator | Tuesday 03 June 2025 15:35:25 +0000 (0:00:00.984) 0:03:23.336 ********** 2025-06-03 15:38:33.657309 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:38:33.657314 | orchestrator | 2025-06-03 15:38:33.657319 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-06-03 15:38:33.657323 | orchestrator | Tuesday 03 June 2025 15:35:28 +0000 (0:00:03.155) 0:03:26.491 ********** 2025-06-03 15:38:33.657333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:38:33.657343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-03 15:38:33.657348 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:38:33.657364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  
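The mariadb entry dumped above is the one service in this run where haproxy-config supplies a custom_member_list instead of letting the role enumerate backends itself: testbed-node-0 is the only active server and testbed-node-1/testbed-node-2 are marked backup, so the 3306 frontend is active/passive rather than load-balanced, with clitcpka/srvtcpka keepalives and 3600s client/server timeouts. A minimal sketch of how that member list could be assembled into a listen block follows; build_mariadb_listen and the VIP value are illustrative assumptions, not the kolla-ansible template.

# Illustrative sketch only: assembling the mariadb listen block from the
# custom_member_list seen in the log (node-0 active, node-1/node-2 backup).
# build_mariadb_listen and the VIP are assumptions for this sketch.
def build_mariadb_listen(vip, port, frontend_extra, backend_extra, members):
    lines = ["listen mariadb", "    mode tcp", f"    bind {vip}:{port}"]
    lines += [f"    {opt}" for opt in frontend_extra + backend_extra]
    # Keep only non-empty member strings (the list carries a trailing '' entry).
    lines += [f"    {m.strip()}" for m in members if m.strip()]
    return "\n".join(lines)

members = [
    " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5",
    " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
    " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
    "",
]

print(build_mariadb_listen("192.0.2.10", "3306",
                           ["option clitcpka", "timeout client 3600s"],
                           ["option srvtcpka", "timeout server 3600s"],
                           members))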
2025-06-03 15:38:33.657372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-03 15:38:33.657379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-03 15:38:33.657384 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.657388 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657393 | orchestrator | 2025-06-03 15:38:33.657397 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-06-03 15:38:33.657402 | orchestrator | Tuesday 03 June 2025 15:35:30 +0000 (0:00:02.108) 0:03:28.600 ********** 2025-06-03 15:38:33.657411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:38:33.657419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-03 15:38:33.657424 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:38:33.657439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-03 15:38:33.657444 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:38:33.657460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-03 15:38:33.657465 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.657469 | orchestrator | 2025-06-03 15:38:33.657474 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-03 15:38:33.657479 | orchestrator | Tuesday 03 June 2025 15:35:32 +0000 (0:00:01.827) 0:03:30.427 ********** 2025-06-03 15:38:33.657483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-03 15:38:33.657491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-03 15:38:33.657496 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-03 15:38:33.657508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-03 15:38:33.657513 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-03 15:38:33.657522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-03 15:38:33.657527 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.657532 | orchestrator | 2025-06-03 15:38:33.657537 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-06-03 15:38:33.657541 | orchestrator | Tuesday 03 June 2025 15:35:34 +0000 (0:00:02.165) 0:03:32.593 ********** 2025-06-03 15:38:33.657546 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.657552 | orchestrator 
| changed: [testbed-node-2] 2025-06-03 15:38:33.657557 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.657562 | orchestrator | 2025-06-03 15:38:33.657566 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-06-03 15:38:33.657571 | orchestrator | Tuesday 03 June 2025 15:35:37 +0000 (0:00:02.194) 0:03:34.787 ********** 2025-06-03 15:38:33.657575 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657580 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657584 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.657589 | orchestrator | 2025-06-03 15:38:33.657593 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-06-03 15:38:33.657598 | orchestrator | Tuesday 03 June 2025 15:35:38 +0000 (0:00:01.291) 0:03:36.078 ********** 2025-06-03 15:38:33.657602 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657607 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657611 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.657616 | orchestrator | 2025-06-03 15:38:33.657620 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-06-03 15:38:33.657629 | orchestrator | Tuesday 03 June 2025 15:35:38 +0000 (0:00:00.263) 0:03:36.342 ********** 2025-06-03 15:38:33.657679 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.657684 | orchestrator | 2025-06-03 15:38:33.657689 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-06-03 15:38:33.657693 | orchestrator | Tuesday 03 June 2025 15:35:39 +0000 (0:00:01.038) 0:03:37.380 ********** 2025-06-03 15:38:33.657703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-03 15:38:33.657708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-03 15:38:33.657713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-03 15:38:33.657718 | orchestrator | 2025-06-03 15:38:33.657722 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-03 15:38:33.657727 | orchestrator | Tuesday 03 June 2025 15:35:41 +0000 (0:00:01.663) 0:03:39.044 ********** 2025-06-03 15:38:33.657734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-03 15:38:33.657739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-03 15:38:33.657747 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657751 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-03 15:38:33.657765 | orchestrator | skipping: 
[testbed-node-2] 2025-06-03 15:38:33.657769 | orchestrator | 2025-06-03 15:38:33.657774 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-06-03 15:38:33.657778 | orchestrator | Tuesday 03 June 2025 15:35:41 +0000 (0:00:00.437) 0:03:39.482 ********** 2025-06-03 15:38:33.657783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-03 15:38:33.657788 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-03 15:38:33.657797 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-03 15:38:33.657807 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.657811 | orchestrator | 2025-06-03 15:38:33.657816 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-06-03 15:38:33.657820 | orchestrator | Tuesday 03 June 2025 15:35:42 +0000 (0:00:00.644) 0:03:40.127 ********** 2025-06-03 15:38:33.657825 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657829 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657834 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.657839 | orchestrator | 2025-06-03 15:38:33.657843 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-06-03 15:38:33.657848 | orchestrator | Tuesday 03 June 2025 15:35:43 +0000 (0:00:00.811) 0:03:40.938 ********** 2025-06-03 15:38:33.657852 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657857 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657861 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.657866 | orchestrator | 2025-06-03 15:38:33.657870 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-06-03 15:38:33.657875 | orchestrator | Tuesday 03 June 2025 15:35:44 +0000 (0:00:01.458) 0:03:42.397 ********** 2025-06-03 15:38:33.657882 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.657887 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.657891 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.657896 | orchestrator | 2025-06-03 15:38:33.657900 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-06-03 15:38:33.657905 | orchestrator | Tuesday 03 June 2025 15:35:45 +0000 (0:00:00.542) 0:03:42.940 ********** 2025-06-03 15:38:33.657910 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.657914 | orchestrator | 2025-06-03 15:38:33.657921 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-06-03 
15:38:33.657925 | orchestrator | Tuesday 03 June 2025 15:35:46 +0000 (0:00:01.515) 0:03:44.455 ********** 2025-06-03 15:38:33.657930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:38:33.658182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-03 15:38:33.658218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:33.658251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:38:33.658276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.658297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:33.658310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:38:33.658324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-03 15:38:33.658338 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658369 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-03 15:38:33.658381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:33.658420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:33.658430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.658484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.658489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:33.658497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 
'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:33.658502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:38:33.658511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:33.658516 | orchestrator |
2025-06-03 15:38:33.658521 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-06-03 15:38:33.658526 | orchestrator | Tuesday 03 June 2025 15:35:51 +0000 (0:00:05.148) 0:03:49.603 **********
2025-06-03 15:38:33.658531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:38:33.658539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES':
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-03 15:38:33.658564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2025-06-03 15:38:33.658571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:33.658596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': 
True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.658626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:33.658652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': 
False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658657 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.658662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:38:33.658669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:38:33.658677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-03 15:38:33.658726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-03 15:38:33.658731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:33.658786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:33.658794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.658866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:33.658873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:33.658878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.658889 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.658905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-03 15:38:33.659521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:33.659553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.659561 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.659568 | orchestrator | 2025-06-03 15:38:33.659576 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-06-03 15:38:33.659583 | orchestrator | Tuesday 03 June 2025 15:35:54 +0000 (0:00:02.155) 0:03:51.759 ********** 2025-06-03 15:38:33.659591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:33.659598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:33.659607 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.659614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:33.659622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:33.659630 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.659661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:33.659665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:33.659670 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.659674 | orchestrator | 2025-06-03 15:38:33.659679 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-03 15:38:33.659684 | orchestrator | Tuesday 03 June 2025 15:35:56 +0000 (0:00:02.611) 0:03:54.371 ********** 2025-06-03 15:38:33.659688 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.659693 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.659697 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.659703 | orchestrator | 2025-06-03 15:38:33.659716 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-03 15:38:33.659721 | orchestrator | Tuesday 03 June 2025 15:35:58 +0000 (0:00:01.445) 0:03:55.817 ********** 2025-06-03 15:38:33.659726 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.659731 | 
orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.659735 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.659740 | orchestrator | 2025-06-03 15:38:33.659745 | orchestrator | TASK [include_role : placement] ************************************************ 2025-06-03 15:38:33.659754 | orchestrator | Tuesday 03 June 2025 15:36:00 +0000 (0:00:02.381) 0:03:58.198 ********** 2025-06-03 15:38:33.659760 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.659764 | orchestrator | 2025-06-03 15:38:33.659769 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-03 15:38:33.659774 | orchestrator | Tuesday 03 June 2025 15:36:01 +0000 (0:00:01.211) 0:03:59.410 ********** 2025-06-03 15:38:33.659786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.659792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.659797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.659802 | orchestrator | 2025-06-03 15:38:33.659807 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-03 15:38:33.659813 | orchestrator | Tuesday 03 June 2025 15:36:06 +0000 (0:00:04.549) 0:04:03.959 ********** 2025-06-03 15:38:33.659823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.659828 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.659833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.659841 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.659846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.659851 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.659856 | orchestrator | 2025-06-03 15:38:33.659861 | 
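The placement-api item logged above carries a 'haproxy' mapping with an internal entry (placement_api) and an external entry (placement_api_external), each with its own enabled flag, listen port and tls_backend setting. Below is a minimal Python sketch of how such a mapping can be walked to decide which listeners get configured; it only illustrates the data shape printed in this log, it is not kolla-ansible's actual haproxy-config template logic, and INTERNAL_VIP/EXTERNAL_VIP are hypothetical placeholders.

# Illustrative sketch only (not kolla-ansible's real template logic):
# walk a 'haproxy' mapping of the shape logged above and report which
# listeners would be configured on which VIP.
placement_haproxy = {
    'placement_api': {'enabled': True, 'mode': 'http', 'external': False,
                      'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'},
    'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True,
                               'external_fqdn': 'api.testbed.osism.xyz',
                               'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'},
}

def listeners(haproxy_map, internal_vip, external_vip):
    """Yield (service name, bind address, listen port) for every enabled entry."""
    for name, entry in haproxy_map.items():
        if not entry.get('enabled'):
            continue
        bind = external_vip if entry.get('external') else internal_vip
        yield name, bind, entry['listen_port']

for name, bind, port in listeners(placement_haproxy, 'INTERNAL_VIP', 'EXTERNAL_VIP'):
    print(f"{name}: bind {bind}:{port}")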
orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-03 15:38:33.659866 | orchestrator | Tuesday 03 June 2025 15:36:06 +0000 (0:00:00.496) 0:04:04.455 ********** 2025-06-03 15:38:33.659871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-03 15:38:33.659876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-03 15:38:33.659882 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.659887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-03 15:38:33.659895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-03 15:38:33.659900 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.659905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-03 15:38:33.659909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-03 15:38:33.659914 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.659919 | orchestrator | 2025-06-03 15:38:33.659924 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-03 15:38:33.659929 | orchestrator | Tuesday 03 June 2025 15:36:07 +0000 (0:00:00.765) 0:04:05.221 ********** 2025-06-03 15:38:33.659934 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.659939 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.659944 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.659949 | orchestrator | 2025-06-03 15:38:33.659954 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-03 15:38:33.659958 | orchestrator | Tuesday 03 June 2025 15:36:08 +0000 (0:00:01.435) 0:04:06.656 ********** 2025-06-03 15:38:33.659963 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.659970 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.659975 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.659980 | orchestrator | 2025-06-03 15:38:33.659985 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-03 15:38:33.659990 | orchestrator | Tuesday 03 June 2025 15:36:10 +0000 (0:00:01.870) 0:04:08.527 ********** 2025-06-03 15:38:33.659995 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.659999 | orchestrator | 2025-06-03 15:38:33.660004 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-03 15:38:33.660009 | orchestrator 
| Tuesday 03 June 2025 15:36:11 +0000 (0:00:01.150) 0:04:09.677 ********** 2025-06-03 15:38:33.660018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.660024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.660032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.660039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.660045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.660053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.660059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.660067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2025-06-03 15:38:33.660072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.660077 | orchestrator | 2025-06-03 15:38:33.660082 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-03 15:38:33.660087 | orchestrator | Tuesday 03 June 2025 15:36:16 +0000 (0:00:04.657) 0:04:14.335 ********** 2025-06-03 15:38:33.660097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.660103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.660111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2025-06-03 15:38:33.660117 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.660122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.660130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.660136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.660142 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.660153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.660165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.660170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.660176 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.660181 | orchestrator | 2025-06-03 15:38:33.660187 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-03 15:38:33.660192 | orchestrator | Tuesday 03 June 2025 15:36:17 +0000 (0:00:00.788) 0:04:15.123 ********** 2025-06-03 15:38:33.660198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660224 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.660229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660259 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.660265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:33.660287 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.660293 | orchestrator | 2025-06-03 15:38:33.660298 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-03 15:38:33.660304 | orchestrator | Tuesday 03 June 2025 15:36:18 +0000 (0:00:00.776) 0:04:15.900 ********** 2025-06-03 15:38:33.660309 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.660315 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.660320 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.660325 | orchestrator | 2025-06-03 15:38:33.660331 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-03 15:38:33.660336 | orchestrator | Tuesday 03 June 2025 15:36:20 +0000 (0:00:01.840) 0:04:17.740 ********** 2025-06-03 15:38:33.660342 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.660347 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.660352 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.660358 | orchestrator | 2025-06-03 15:38:33.660363 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-03 15:38:33.660369 | orchestrator | Tuesday 03 June 2025 15:36:22 +0000 (0:00:02.243) 0:04:19.984 ********** 2025-06-03 15:38:33.660374 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.660380 | orchestrator | 2025-06-03 15:38:33.660385 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-03 15:38:33.660391 | orchestrator | Tuesday 03 June 2025 15:36:23 +0000 (0:00:01.710) 0:04:21.695 ********** 2025-06-03 15:38:33.660397 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-03 15:38:33.660402 | orchestrator | 2025-06-03 15:38:33.660408 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-03 15:38:33.660413 | orchestrator | Tuesday 03 June 2025 15:36:25 +0000 (0:00:01.142) 0:04:22.837 ********** 2025-06-03 15:38:33.660419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-03 15:38:33.660427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-03 15:38:33.660437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-03 15:38:33.660443 | orchestrator | 2025-06-03 15:38:33.660449 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-03 15:38:33.660454 | orchestrator | Tuesday 03 June 2025 15:36:29 +0000 (0:00:03.927) 0:04:26.765 ********** 2025-06-03 15:38:33.660463 | orchestrator | 2025-06-03 15:38:33 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:38:33.660469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:33.660475 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.660481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:33.660487 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.660492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:33.660497 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.660502 | orchestrator | 2025-06-03 15:38:33.660507 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-03 15:38:33.660512 | orchestrator | Tuesday 03 June 2025 15:36:30 +0000 (0:00:01.352) 0:04:28.117 ********** 2025-06-03 15:38:33.660517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-03 15:38:33.660522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-03 15:38:33.660527 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.660532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-03 15:38:33.660538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-03 15:38:33.660546 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.660553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-03 15:38:33.660558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-03 15:38:33.660563 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.660568 | orchestrator | 2025-06-03 15:38:33.660573 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-03 15:38:33.660578 | orchestrator | Tuesday 03 June 2025 15:36:32 +0000 (0:00:01.827) 0:04:29.944 ********** 2025-06-03 15:38:33.660583 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.660588 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.660592 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.660597 | orchestrator | 2025-06-03 15:38:33.660602 | 
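The nova-novncproxy items above set 'backend_http_extra': ['timeout tunnel 1h'] on both the internal and external listener. 'timeout tunnel' is the HAProxy setting that governs long-lived bidirectional connections such as the noVNC websocket, so console sessions are not cut off by the default HTTP timeouts. The snippet below is a hand-written sketch of how such an extra option could be folded into a backend stanza; it is not the exact output of the kolla-ansible haproxy-config template, and the server addresses are simply the testbed hostnames seen in this log.

# Illustrative sketch only: render a hypothetical HAProxy backend stanza that
# includes the 'backend_http_extra' lines logged above (e.g. 'timeout tunnel 1h').
def render_backend(name, port, servers, extras):
    lines = [f"backend {name}_back", "    mode http"]
    lines += [f"    {extra}" for extra in extras]          # long websocket/tunnel timeout
    lines += [f"    server {h} {h}:{port} check" for h in servers]
    return "\n".join(lines)

print(render_backend("nova_novncproxy", "6080",
                     ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
                     ["timeout tunnel 1h"]))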
orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-03 15:38:33.660607 | orchestrator | Tuesday 03 June 2025 15:36:34 +0000 (0:00:02.212) 0:04:32.157 ********** 2025-06-03 15:38:33.660612 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.660617 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.660622 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.660627 | orchestrator | 2025-06-03 15:38:33.660651 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-03 15:38:33.660656 | orchestrator | Tuesday 03 June 2025 15:36:37 +0000 (0:00:02.815) 0:04:34.972 ********** 2025-06-03 15:38:33.660662 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-03 15:38:33.660666 | orchestrator | 2025-06-03 15:38:33.660671 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-03 15:38:33.660676 | orchestrator | Tuesday 03 June 2025 15:36:37 +0000 (0:00:00.736) 0:04:35.708 ********** 2025-06-03 15:38:33.660681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:33.660687 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.660692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:33.660697 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.660702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:33.660710 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.660715 | orchestrator | 2025-06-03 15:38:33.660719 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-03 15:38:33.660724 | orchestrator | Tuesday 03 June 2025 15:36:39 +0000 (0:00:01.115) 0:04:36.824 ********** 2025-06-03 15:38:33.660730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': 
{'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:33.660735 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.660742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:33.660747 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.660752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:33.660757 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.660762 | orchestrator | 2025-06-03 15:38:33.660782 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-03 15:38:33.660787 | orchestrator | Tuesday 03 June 2025 15:36:40 +0000 (0:00:01.317) 0:04:38.142 ********** 2025-06-03 15:38:33.660792 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.660797 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.660801 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.660806 | orchestrator | 2025-06-03 15:38:33.660811 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-03 15:38:33.660816 | orchestrator | Tuesday 03 June 2025 15:36:41 +0000 (0:00:01.106) 0:04:39.249 ********** 2025-06-03 15:38:33.660821 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.660826 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.660831 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.660836 | orchestrator | 2025-06-03 15:38:33.660840 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-03 15:38:33.660845 | orchestrator | Tuesday 03 June 2025 15:36:43 +0000 (0:00:02.213) 0:04:41.462 ********** 2025-06-03 15:38:33.660850 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.660855 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.660860 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.660865 | orchestrator | 2025-06-03 15:38:33.660870 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-03 15:38:33.660874 | orchestrator | Tuesday 03 June 2025 15:36:46 +0000 (0:00:02.854) 0:04:44.317 ********** 2025-06-03 
15:38:33.660879 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-03 15:38:33.660887 | orchestrator | 2025-06-03 15:38:33.660892 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-03 15:38:33.660909 | orchestrator | Tuesday 03 June 2025 15:36:47 +0000 (0:00:00.877) 0:04:45.194 ********** 2025-06-03 15:38:33.660915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:33.660920 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.660925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:33.660930 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.660935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:33.660940 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.660945 | orchestrator | 2025-06-03 15:38:33.660952 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-03 15:38:33.660957 | orchestrator | Tuesday 03 June 2025 15:36:48 +0000 (0:00:00.911) 0:04:46.106 ********** 2025-06-03 15:38:33.660963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:33.660967 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.660976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:33.660981 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.660986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:33.660994 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.660999 | orchestrator | 2025-06-03 15:38:33.661004 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-03 15:38:33.661009 | orchestrator | Tuesday 03 June 2025 15:36:49 +0000 (0:00:01.075) 0:04:47.181 ********** 2025-06-03 15:38:33.661014 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.661019 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.661023 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.661028 | orchestrator | 2025-06-03 15:38:33.661033 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-03 15:38:33.661038 | orchestrator | Tuesday 03 June 2025 15:36:50 +0000 (0:00:01.484) 0:04:48.666 ********** 2025-06-03 15:38:33.661043 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.661048 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.661053 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.661057 | orchestrator | 2025-06-03 15:38:33.661062 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-03 15:38:33.661067 | orchestrator | Tuesday 03 June 2025 15:36:53 +0000 (0:00:02.222) 0:04:50.888 ********** 2025-06-03 15:38:33.661072 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.661077 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.661082 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.661087 | orchestrator | 2025-06-03 15:38:33.661092 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-03 15:38:33.661097 | orchestrator | Tuesday 03 June 2025 15:36:55 +0000 (0:00:02.748) 0:04:53.637 ********** 2025-06-03 15:38:33.661101 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.661106 | orchestrator | 2025-06-03 15:38:33.661111 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-03 15:38:33.661116 | orchestrator | Tuesday 03 June 2025 15:36:57 +0000 (0:00:01.285) 0:04:54.923 ********** 2025-06-03 15:38:33.661121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.661129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:33.661137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.661157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.661162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:33.661169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.661190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.661196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:33.661201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.661222 | orchestrator | 2025-06-03 15:38:33.661228 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-03 15:38:33.661233 | orchestrator | Tuesday 03 June 2025 15:37:00 +0000 (0:00:03.760) 0:04:58.684 ********** 2025-06-03 15:38:33.661242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.661247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:33.661252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.661274 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.661289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.661294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:33.661299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:33.661315 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.661322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.661330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:33 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:33.661339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:33.661356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03
15:38:33.661361 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.661366 | orchestrator | 2025-06-03 15:38:33.661371 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-03 15:38:33.661376 | orchestrator | Tuesday 03 June 2025 15:37:01 +0000 (0:00:00.682) 0:04:59.366 ********** 2025-06-03 15:38:33.661381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:33.661386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:33.661391 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.661399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:33.661403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:33.661408 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.661416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:33.661421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:33.661426 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.661431 | orchestrator | 2025-06-03 15:38:33.661436 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-03 15:38:33.661441 | orchestrator | Tuesday 03 June 2025 15:37:02 +0000 (0:00:01.020) 0:05:00.387 ********** 2025-06-03 15:38:33.661446 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.661451 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.661455 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.661460 | orchestrator | 2025-06-03 15:38:33.661465 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-03 15:38:33.661470 | orchestrator | Tuesday 03 June 2025 15:37:04 +0000 (0:00:01.632) 0:05:02.019 ********** 2025-06-03 15:38:33.661475 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.661480 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.661487 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.661492 | orchestrator | 2025-06-03 15:38:33.661497 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-03 15:38:33.661502 | orchestrator | Tuesday 03 June 2025 15:37:06 +0000 (0:00:02.016) 0:05:04.035 ********** 2025-06-03 15:38:33.661507 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.661512 | orchestrator | 2025-06-03 15:38:33.661517 | 
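The haproxy-config tasks above are driven by a per-service map of roughly the following shape. This is an illustrative Python-style reconstruction of the octavia-api item printed by the "Copying over octavia haproxy config" task; the variable name octavia_services and anything not visible in the logged items are assumptions, not a copy of the kolla-ansible sources.

# Sketch only (assumed variable name, structure reconstructed from the logged items above)
octavia_services = {
    'octavia-api': {
        'container_name': 'octavia_api',
        'group': 'octavia-api',
        'enabled': True,
        'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530',
        # Only services carrying a 'haproxy' sub-dict produce frontend/backend
        # sections; an internal and an external listener can be declared per service.
        'haproxy': {
            'octavia_api': {
                'enabled': 'yes', 'mode': 'http', 'external': False,
                'port': '9876', 'listen_port': '9876', 'tls_backend': 'no',
            },
            'octavia_api_external': {
                'enabled': 'yes', 'mode': 'http', 'external': True,
                'external_fqdn': 'api.testbed.osism.xyz',
                'port': '9876', 'listen_port': '9876', 'tls_backend': 'no',
            },
        },
    },
    # octavia-driver-agent, octavia-health-manager, octavia-housekeeping and
    # octavia-worker define no 'haproxy' key, which is why the haproxy-config
    # and single-external-frontend tasks above report "skipping" for those items.
}

The "Configuring firewall for octavia" task iterates the same 'haproxy' sub-entries (octavia_api and octavia_api_external), as its skipped items above show.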
orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-03 15:38:33.661522 | orchestrator | Tuesday 03 June 2025 15:37:07 +0000 (0:00:01.342) 0:05:05.378 ********** 2025-06-03 15:38:33.661527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:38:33.661533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:38:33.661542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:38:33.661549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:38:33.661559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:38:33.661565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:38:33.661574 | orchestrator | 2025-06-03 15:38:33.661579 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-03 15:38:33.661584 | orchestrator | Tuesday 03 June 2025 15:37:12 +0000 (0:00:04.974) 0:05:10.352 ********** 2025-06-03 15:38:33.661591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:38:33.661599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:38:33.661605 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.661610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:38:33.661615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:38:33.661623 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.661628 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:38:33.661652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:38:33.661658 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.661663 | orchestrator | 2025-06-03 15:38:33.661671 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-03 15:38:33.661676 | orchestrator | Tuesday 03 June 2025 15:37:13 +0000 (0:00:00.906) 0:05:11.258 ********** 2025-06-03 15:38:33.661681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-03 15:38:33.661686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:33.661692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:33.661697 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.661702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-03 15:38:33.661707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:33.661715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:33.661720 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.661725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-03 15:38:33.661730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:33.661735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:33.661740 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.661745 | orchestrator | 2025-06-03 15:38:33.661750 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-03 15:38:33.661755 | orchestrator | Tuesday 03 June 2025 15:37:14 +0000 (0:00:00.862) 0:05:12.120 ********** 2025-06-03 15:38:33.661759 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.661764 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.661769 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.661774 | orchestrator | 2025-06-03 15:38:33.661779 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-03 15:38:33.661784 | orchestrator | Tuesday 03 June 2025 15:37:14 +0000 (0:00:00.443) 0:05:12.564 ********** 2025-06-03 15:38:33.661789 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.661794 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.661798 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.661803 | orchestrator | 2025-06-03 15:38:33.661808 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-03 15:38:33.661813 | orchestrator | Tuesday 03 June 2025 15:37:16 +0000 (0:00:01.444) 0:05:14.008 ********** 2025-06-03 15:38:33.661820 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.661825 | orchestrator | 2025-06-03 15:38:33.661830 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-03 15:38:33.661835 | orchestrator | Tuesday 03 June 2025 15:37:17 +0000 (0:00:01.516) 0:05:15.525 ********** 2025-06-03 15:38:33.661840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:38:33.661849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:38:33.661860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.661865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.661870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:38:33.661876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.661883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:38:33.661888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.661897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.661905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.661910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:38:33.661915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:38:33.661920 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.661928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.661933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.661942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:38:33.661950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:38:33.661955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:33.661963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.661968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:33.661983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.661988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.661993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:38:33.661999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.662011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.662050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:33.662059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.662076 | orchestrator | 2025-06-03 15:38:33.662081 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-03 15:38:33.662086 | orchestrator | Tuesday 03 June 2025 15:37:21 +0000 (0:00:04.144) 0:05:19.669 ********** 2025-06-03 15:38:33.662092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-03 15:38:33.662108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2025-06-03 15:38:33.662114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.662140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-03 15:38:33.662145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:33.662153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.662175 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.662180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-03 15:38:33.662185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:38:33.662190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.662214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-03 15:38:33.662220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:33.662225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.662240 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.662245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-03 15:38:33.662255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:38:33.662263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.662288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-03 15:38:33.662309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:33.662318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:33.662332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:33.662337 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.662342 | orchestrator | 2025-06-03 15:38:33.662347 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-03 15:38:33.662351 | orchestrator | Tuesday 03 June 2025 15:37:23 +0000 (0:00:01.551) 0:05:21.221 ********** 2025-06-03 15:38:33.662356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-03 15:38:33.662361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-03 15:38:33.662367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 15:38:33.662372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 15:38:33.662378 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.662383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-03 15:38:33.662388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-03 15:38:33.662395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 
15:38:33.662401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 15:38:33.662406 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.662412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-03 15:38:33.662417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-03 15:38:33.662422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 15:38:33.662431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 15:38:33.662436 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.662441 | orchestrator | 2025-06-03 15:38:33.662446 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-03 15:38:33.662451 | orchestrator | Tuesday 03 June 2025 15:37:24 +0000 (0:00:01.036) 0:05:22.258 ********** 2025-06-03 15:38:33.662456 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.662461 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.662465 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.662470 | orchestrator | 2025-06-03 15:38:33.662475 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-03 15:38:33.662480 | orchestrator | Tuesday 03 June 2025 15:37:24 +0000 (0:00:00.446) 0:05:22.704 ********** 2025-06-03 15:38:33.662484 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.662489 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.662494 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.662499 | orchestrator | 2025-06-03 15:38:33.662504 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-03 15:38:33.662509 | orchestrator | Tuesday 03 June 2025 15:37:26 +0000 (0:00:01.845) 0:05:24.550 ********** 2025-06-03 15:38:33.662513 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.662518 | orchestrator | 2025-06-03 15:38:33.662523 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-03 15:38:33.662528 | orchestrator | Tuesday 03 June 2025 15:37:28 +0000 (0:00:01.700) 0:05:26.250 ********** 2025-06-03 15:38:33.662533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:38:33.662542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:38:33.662550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:38:33.662555 | orchestrator | 2025-06-03 15:38:33.662560 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-03 15:38:33.662582 | orchestrator | Tuesday 03 June 2025 15:37:31 +0000 (0:00:02.620) 0:05:28.870 ********** 2025-06-03 15:38:33.662587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-03 15:38:33.662592 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.662597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-03 15:38:33.662613 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.662621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-03 15:38:33.662626 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.662631 | orchestrator | 2025-06-03 15:38:33.662718 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-03 15:38:33.662724 | orchestrator | Tuesday 03 June 2025 15:37:31 +0000 (0:00:00.388) 0:05:29.259 ********** 2025-06-03 15:38:33.662729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-03 15:38:33.662734 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.662739 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-03 15:38:33.662744 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.662748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-03 15:38:33.662753 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.662758 | orchestrator | 2025-06-03 15:38:33.662763 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-03 15:38:33.662768 | orchestrator | Tuesday 03 June 2025 15:37:32 +0000 (0:00:01.041) 0:05:30.301 ********** 2025-06-03 15:38:33.662793 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.662799 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.662804 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.662809 | orchestrator | 2025-06-03 15:38:33.662815 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-03 15:38:33.662819 | orchestrator | Tuesday 03 June 2025 15:37:33 +0000 (0:00:00.435) 0:05:30.736 ********** 2025-06-03 15:38:33.662824 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.662829 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.662834 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.662839 | orchestrator | 2025-06-03 15:38:33.662844 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-03 15:38:33.662853 | orchestrator | Tuesday 03 June 2025 15:37:34 +0000 (0:00:01.338) 0:05:32.075 ********** 2025-06-03 15:38:33.662857 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:33.662862 | orchestrator | 2025-06-03 15:38:33.662867 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-03 15:38:33.662872 | orchestrator | Tuesday 03 June 2025 15:37:36 +0000 (0:00:01.768) 0:05:33.844 ********** 2025-06-03 15:38:33.662877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.662882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.662891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.662901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.662910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.662916 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:33.662921 | orchestrator | 2025-06-03 15:38:33.662926 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-03 15:38:33.662931 | orchestrator | Tuesday 03 June 2025 15:37:42 +0000 (0:00:06.260) 0:05:40.105 ********** 2025-06-03 15:38:33.662938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.662947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.662956 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.662961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.662966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.662971 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.662981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.662989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-03 15:38:33.663001 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663006 | orchestrator | 2025-06-03 15:38:33.663011 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-03 15:38:33.663015 | orchestrator | Tuesday 03 June 2025 15:37:43 +0000 (0:00:00.614) 0:05:40.719 ********** 2025-06-03 15:38:33.663029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663050 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663074 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:33.663099 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663104 | orchestrator | 2025-06-03 15:38:33.663111 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-03 15:38:33.663116 | orchestrator | Tuesday 03 June 2025 15:37:44 +0000 (0:00:01.675) 0:05:42.394 ********** 2025-06-03 15:38:33.663121 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.663130 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.663135 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.663140 | orchestrator | 2025-06-03 15:38:33.663144 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-03 15:38:33.663149 | orchestrator | Tuesday 03 June 2025 15:37:46 +0000 (0:00:01.351) 0:05:43.746 ********** 2025-06-03 15:38:33.663154 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.663159 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.663164 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.663169 | orchestrator | 2025-06-03 15:38:33.663174 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-03 15:38:33.663178 | orchestrator | Tuesday 03 June 2025 15:37:48 +0000 (0:00:02.171) 0:05:45.917 ********** 2025-06-03 15:38:33.663183 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663188 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663200 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663204 | orchestrator | 2025-06-03 15:38:33.663209 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-03 15:38:33.663221 | orchestrator | Tuesday 03 June 2025 15:37:48 +0000 (0:00:00.322) 0:05:46.240 ********** 2025-06-03 15:38:33.663233 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663248 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663260 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663265 | orchestrator | 2025-06-03 15:38:33.663270 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-03 15:38:33.663275 | orchestrator | Tuesday 03 June 2025 15:37:49 +0000 (0:00:00.642) 0:05:46.883 ********** 2025-06-03 15:38:33.663280 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663285 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663290 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663294 | orchestrator | 2025-06-03 15:38:33.663299 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-03 15:38:33.663304 | orchestrator | Tuesday 03 June 2025 15:37:49 +0000 (0:00:00.308) 0:05:47.191 ********** 2025-06-03 15:38:33.663309 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663314 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663319 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663323 | orchestrator | 2025-06-03 15:38:33.663328 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-03 15:38:33.663333 | orchestrator | Tuesday 03 June 2025 15:37:49 +0000 (0:00:00.320) 0:05:47.512 ********** 2025-06-03 
15:38:33.663338 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663343 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663347 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663352 | orchestrator | 2025-06-03 15:38:33.663357 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-03 15:38:33.663362 | orchestrator | Tuesday 03 June 2025 15:37:50 +0000 (0:00:00.314) 0:05:47.826 ********** 2025-06-03 15:38:33.663367 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663372 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663376 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663381 | orchestrator | 2025-06-03 15:38:33.663386 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-03 15:38:33.663391 | orchestrator | Tuesday 03 June 2025 15:37:50 +0000 (0:00:00.838) 0:05:48.665 ********** 2025-06-03 15:38:33.663396 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.663401 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.663405 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.663410 | orchestrator | 2025-06-03 15:38:33.663415 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-03 15:38:33.663420 | orchestrator | Tuesday 03 June 2025 15:37:51 +0000 (0:00:00.695) 0:05:49.360 ********** 2025-06-03 15:38:33.663425 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.663430 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.663438 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.663443 | orchestrator | 2025-06-03 15:38:33.663448 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-03 15:38:33.663453 | orchestrator | Tuesday 03 June 2025 15:37:52 +0000 (0:00:00.385) 0:05:49.746 ********** 2025-06-03 15:38:33.663458 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.663462 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.663467 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.663472 | orchestrator | 2025-06-03 15:38:33.663477 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-03 15:38:33.663482 | orchestrator | Tuesday 03 June 2025 15:37:53 +0000 (0:00:01.245) 0:05:50.992 ********** 2025-06-03 15:38:33.663487 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.663491 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.663496 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.663501 | orchestrator | 2025-06-03 15:38:33.663506 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-03 15:38:33.663511 | orchestrator | Tuesday 03 June 2025 15:37:54 +0000 (0:00:00.936) 0:05:51.928 ********** 2025-06-03 15:38:33.663515 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.663520 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.663525 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.663530 | orchestrator | 2025-06-03 15:38:33.663534 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-03 15:38:33.663539 | orchestrator | Tuesday 03 June 2025 15:37:55 +0000 (0:00:00.869) 0:05:52.798 ********** 2025-06-03 15:38:33.663544 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.663549 | orchestrator | changed: [testbed-node-2] 2025-06-03 
15:38:33.663554 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.663559 | orchestrator | 2025-06-03 15:38:33.663564 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-03 15:38:33.663569 | orchestrator | Tuesday 03 June 2025 15:38:03 +0000 (0:00:08.474) 0:06:01.272 ********** 2025-06-03 15:38:33.663574 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.663578 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.663583 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.663588 | orchestrator | 2025-06-03 15:38:33.663596 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-03 15:38:33.663601 | orchestrator | Tuesday 03 June 2025 15:38:04 +0000 (0:00:00.711) 0:06:01.984 ********** 2025-06-03 15:38:33.663605 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.663610 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.663615 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.663620 | orchestrator | 2025-06-03 15:38:33.663625 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-03 15:38:33.663630 | orchestrator | Tuesday 03 June 2025 15:38:17 +0000 (0:00:13.078) 0:06:15.062 ********** 2025-06-03 15:38:33.663650 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.663655 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.663659 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.663664 | orchestrator | 2025-06-03 15:38:33.663669 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-03 15:38:33.663674 | orchestrator | Tuesday 03 June 2025 15:38:18 +0000 (0:00:00.754) 0:06:15.817 ********** 2025-06-03 15:38:33.663679 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:33.663684 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:33.663689 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:33.663693 | orchestrator | 2025-06-03 15:38:33.663698 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-03 15:38:33.663703 | orchestrator | Tuesday 03 June 2025 15:38:27 +0000 (0:00:09.139) 0:06:24.956 ********** 2025-06-03 15:38:33.663708 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663713 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663718 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663723 | orchestrator | 2025-06-03 15:38:33.663731 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-03 15:38:33.663741 | orchestrator | Tuesday 03 June 2025 15:38:27 +0000 (0:00:00.307) 0:06:25.264 ********** 2025-06-03 15:38:33.663746 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663751 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663755 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663760 | orchestrator | 2025-06-03 15:38:33.663765 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-03 15:38:33.663770 | orchestrator | Tuesday 03 June 2025 15:38:28 +0000 (0:00:00.567) 0:06:25.832 ********** 2025-06-03 15:38:33.663775 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663780 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663797 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663803 | orchestrator | 
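
The handler sequence above rolls the backup load balancer containers: keepalived, haproxy and proxysql are stopped on the backup nodes, then haproxy, proxysql and keepalived are started again one after the other, with a wait step after each start so the next service only comes up once the previous one is reachable. A minimal sketch of such a wait-until-listening check in Python (host, port and timeout are illustrative placeholders, not values taken from this deployment, and this is not the actual kolla-ansible implementation):

import socket
import time

def wait_for_listener(host, port, timeout=60.0, interval=1.0):
    # Poll a TCP port until a service accepts connections or the timeout expires,
    # similar in spirit to the "Wait for backup haproxy to start" handler above.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

if __name__ == "__main__":
    # Placeholder address and port; the real VIP and service ports depend on the environment.
    if not wait_for_listener("192.168.16.254", 443):
        raise SystemExit("service did not start listening in time")
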
2025-06-03 15:38:33.663807 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-03 15:38:33.663812 | orchestrator | Tuesday 03 June 2025 15:38:28 +0000 (0:00:00.343) 0:06:26.176 ********** 2025-06-03 15:38:33.663817 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663822 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663827 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663831 | orchestrator | 2025-06-03 15:38:33.663836 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-03 15:38:33.663841 | orchestrator | Tuesday 03 June 2025 15:38:28 +0000 (0:00:00.310) 0:06:26.486 ********** 2025-06-03 15:38:33.663846 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663851 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663856 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663860 | orchestrator | 2025-06-03 15:38:33.663865 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-03 15:38:33.663870 | orchestrator | Tuesday 03 June 2025 15:38:29 +0000 (0:00:00.311) 0:06:26.798 ********** 2025-06-03 15:38:33.663875 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:33.663880 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:33.663885 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:33.663890 | orchestrator | 2025-06-03 15:38:33.663894 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-03 15:38:33.663899 | orchestrator | Tuesday 03 June 2025 15:38:29 +0000 (0:00:00.550) 0:06:27.349 ********** 2025-06-03 15:38:33.663904 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.663909 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.663914 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.663919 | orchestrator | 2025-06-03 15:38:33.663923 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-03 15:38:33.663928 | orchestrator | Tuesday 03 June 2025 15:38:30 +0000 (0:00:00.829) 0:06:28.178 ********** 2025-06-03 15:38:33.663933 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:33.663938 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:33.663943 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:33.663948 | orchestrator | 2025-06-03 15:38:33.663952 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:38:33.663957 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-03 15:38:33.663963 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-03 15:38:33.663967 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-03 15:38:33.663972 | orchestrator | 2025-06-03 15:38:33.663977 | orchestrator | 2025-06-03 15:38:33.663982 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:38:33.663987 | orchestrator | Tuesday 03 June 2025 15:38:31 +0000 (0:00:00.755) 0:06:28.933 ********** 2025-06-03 15:38:33.663992 | orchestrator | =============================================================================== 2025-06-03 15:38:33.664001 | orchestrator | loadbalancer : Start backup proxysql container 
------------------------- 13.08s 2025-06-03 15:38:33.664006 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.14s 2025-06-03 15:38:33.664011 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.47s 2025-06-03 15:38:33.664016 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 6.30s 2025-06-03 15:38:33.664023 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.26s 2025-06-03 15:38:33.664028 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.39s 2025-06-03 15:38:33.664033 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.15s 2025-06-03 15:38:33.664038 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.97s 2025-06-03 15:38:33.664043 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.77s 2025-06-03 15:38:33.664048 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.66s 2025-06-03 15:38:33.664052 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.64s 2025-06-03 15:38:33.664057 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.62s 2025-06-03 15:38:33.664062 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.55s 2025-06-03 15:38:33.664067 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.43s 2025-06-03 15:38:33.664072 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.43s 2025-06-03 15:38:33.664076 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.31s 2025-06-03 15:38:33.664081 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.14s 2025-06-03 15:38:33.664089 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.98s 2025-06-03 15:38:33.664094 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 3.95s 2025-06-03 15:38:33.664099 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.93s 2025-06-03 15:38:36.702245 | orchestrator | 2025-06-03 15:38:36 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:38:36.703708 | orchestrator | 2025-06-03 15:38:36 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:38:36.705577 | orchestrator | 2025-06-03 15:38:36 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:38:36.705710 | orchestrator | 2025-06-03 15:38:36 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:39.746582 | orchestrator | 2025-06-03 15:38:39 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:38:39.748425 | orchestrator | 2025-06-03 15:38:39 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state STARTED 2025-06-03 15:38:39.749384 | orchestrator | 2025-06-03 15:38:39 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:38:39.749817 | orchestrator | 2025-06-03 15:38:39 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:42.790242 | orchestrator | 2025-06-03 15:38:42 | INFO  | Task 
cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED
[... identical polling output condensed: the three tasks cba0b02a-b9bc-430a-9ed9-b1dc2807c96d, 5cb13824-72d5-4b85-b008-e67536fcf76e and 17465f90-8a20-465d-b4a5-831ca841f7cd are each reported "is in state STARTED", followed by "Wait 1 second(s) until the next check", roughly every three seconds from 15:38:42 until 15:40:17 ...]
2025-06-03 15:40:20.487441 | orchestrator | 2025-06-03 15:40:20 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED
2025-06-03 15:40:20.491391 | orchestrator | 2025-06-03 15:40:20 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED
2025-06-03 15:40:20.498669 | orchestrator | 2025-06-03 15:40:20 | INFO  | Task 5cb13824-72d5-4b85-b008-e67536fcf76e is in state SUCCESS
2025-06-03 15:40:20.502188 | orchestrator |
2025-06-03 15:40:20.502260 | orchestrator |
2025-06-03 15:40:20.502273 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-03 15:40:20.502285 | orchestrator |
2025-06-03 15:40:20.502378 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-03 15:40:20.502391 | orchestrator | Tuesday 03 June 2025 15:29:13 +0000 (0:00:00.961) 0:00:00.962 **********
2025-06-03 15:40:20.502410 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:20.502423 | orchestrator |
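
The condensed wait loop above is the deployment wrapper polling the state of the background tasks and sleeping between checks until each one reports SUCCESS. A minimal sketch of that polling pattern in Python (the get_task_state helper is an illustrative stand-in, not the actual OSISM client API):

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # Poll every task until it leaves the STARTED state, mirroring the
    # "is in state STARTED ... Wait 1 second(s) until the next check" lines above.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # assumption: something like a Celery AsyncResult(task_id).state lookup
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

In the job above the real client keeps polling task states through the OSISM tooling; the sketch only reproduces the behaviour that is visible in the log.
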
15:40:20.502435 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-03 15:40:20.502447 | orchestrator | Tuesday 03 June 2025 15:29:14 +0000 (0:00:01.256) 0:00:02.218 ********** 2025-06-03 15:40:20.502458 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.502470 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.502481 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.502492 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.502503 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.502514 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.502524 | orchestrator | 2025-06-03 15:40:20.502535 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-03 15:40:20.502546 | orchestrator | Tuesday 03 June 2025 15:29:16 +0000 (0:00:01.576) 0:00:03.795 ********** 2025-06-03 15:40:20.502742 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.502765 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.502782 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.502795 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.502807 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.502820 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.502832 | orchestrator | 2025-06-03 15:40:20.502845 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-03 15:40:20.502857 | orchestrator | Tuesday 03 June 2025 15:29:17 +0000 (0:00:00.804) 0:00:04.599 ********** 2025-06-03 15:40:20.502869 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.502881 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.502893 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.502905 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.502917 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.502929 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.502995 | orchestrator | 2025-06-03 15:40:20.503034 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-03 15:40:20.503095 | orchestrator | Tuesday 03 June 2025 15:29:18 +0000 (0:00:01.054) 0:00:05.653 ********** 2025-06-03 15:40:20.503109 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.503121 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.503132 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.503143 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.503153 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.503164 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.503175 | orchestrator | 2025-06-03 15:40:20.503186 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-03 15:40:20.503197 | orchestrator | Tuesday 03 June 2025 15:29:19 +0000 (0:00:00.914) 0:00:06.568 ********** 2025-06-03 15:40:20.503235 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.503269 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.503353 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.503365 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.503375 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.503386 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.503397 | orchestrator | 2025-06-03 15:40:20.503408 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-03 15:40:20.503419 | orchestrator | Tuesday 03 June 
2025 15:29:19 +0000 (0:00:00.672) 0:00:07.240 ********** 2025-06-03 15:40:20.503430 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.503441 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.503452 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.503462 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.503473 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.503484 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.503495 | orchestrator | 2025-06-03 15:40:20.503531 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-03 15:40:20.503542 | orchestrator | Tuesday 03 June 2025 15:29:20 +0000 (0:00:00.991) 0:00:08.231 ********** 2025-06-03 15:40:20.503661 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.503682 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.503702 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.503722 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.503743 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.503763 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.503782 | orchestrator | 2025-06-03 15:40:20.503819 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-03 15:40:20.503832 | orchestrator | Tuesday 03 June 2025 15:29:21 +0000 (0:00:00.923) 0:00:09.155 ********** 2025-06-03 15:40:20.503842 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.503853 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.503864 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.503874 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.503885 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.503896 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.503906 | orchestrator | 2025-06-03 15:40:20.503917 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-03 15:40:20.503940 | orchestrator | Tuesday 03 June 2025 15:29:22 +0000 (0:00:01.039) 0:00:10.195 ********** 2025-06-03 15:40:20.503951 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:40:20.503986 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:40:20.503998 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:40:20.504009 | orchestrator | 2025-06-03 15:40:20.504107 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-03 15:40:20.504150 | orchestrator | Tuesday 03 June 2025 15:29:23 +0000 (0:00:00.838) 0:00:11.033 ********** 2025-06-03 15:40:20.504162 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.504199 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.504254 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.504265 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.504276 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.504298 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.504317 | orchestrator | 2025-06-03 15:40:20.504358 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-03 15:40:20.504383 | orchestrator | Tuesday 03 June 2025 15:29:25 +0000 (0:00:01.390) 0:00:12.424 ********** 2025-06-03 15:40:20.504400 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:40:20.504462 | orchestrator 
| ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:40:20.504485 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:40:20.504504 | orchestrator | 2025-06-03 15:40:20.504524 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-03 15:40:20.504591 | orchestrator | Tuesday 03 June 2025 15:29:28 +0000 (0:00:02.947) 0:00:15.371 ********** 2025-06-03 15:40:20.504674 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-03 15:40:20.504686 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-03 15:40:20.504697 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-03 15:40:20.504707 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.504828 | orchestrator | 2025-06-03 15:40:20.504851 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-03 15:40:20.504897 | orchestrator | Tuesday 03 June 2025 15:29:28 +0000 (0:00:00.705) 0:00:16.077 ********** 2025-06-03 15:40:20.505012 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.505027 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.505038 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.505049 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.505060 | orchestrator | 2025-06-03 15:40:20.505071 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-03 15:40:20.505082 | orchestrator | Tuesday 03 June 2025 15:29:29 +0000 (0:00:01.065) 0:00:17.143 ********** 2025-06-03 15:40:20.505119 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.505135 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.505146 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.505168 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.505180 | orchestrator | 2025-06-03 15:40:20.505211 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-03 15:40:20.505295 | orchestrator | Tuesday 03 June 2025 15:29:30 +0000 (0:00:00.559) 0:00:17.702 ********** 2025-06-03 15:40:20.505318 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-03 15:29:25.676559', 'end': '2025-06-03 15:29:25.940778', 'delta': '0:00:00.264219', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.505346 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-03 15:29:26.713515', 'end': '2025-06-03 15:29:26.991622', 'delta': '0:00:00.278107', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.505369 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-03 15:29:27.551653', 'end': '2025-06-03 15:29:27.840144', 'delta': '0:00:00.288491', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.505461 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.505613 | orchestrator | 2025-06-03 15:40:20.505633 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-03 15:40:20.505644 | orchestrator | Tuesday 03 June 2025 15:29:30 +0000 (0:00:00.326) 0:00:18.028 ********** 2025-06-03 15:40:20.505656 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.505711 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.505731 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.505741 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.505751 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.505763 | orchestrator | ok: 
[testbed-node-5] 2025-06-03 15:40:20.505780 | orchestrator | 2025-06-03 15:40:20.505795 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-03 15:40:20.505811 | orchestrator | Tuesday 03 June 2025 15:29:32 +0000 (0:00:01.623) 0:00:19.652 ********** 2025-06-03 15:40:20.505829 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.505846 | orchestrator | 2025-06-03 15:40:20.505863 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-03 15:40:20.505875 | orchestrator | Tuesday 03 June 2025 15:29:33 +0000 (0:00:00.791) 0:00:20.443 ********** 2025-06-03 15:40:20.505885 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.505895 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.505905 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.505924 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.505934 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.505944 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.505953 | orchestrator | 2025-06-03 15:40:20.505963 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-03 15:40:20.505972 | orchestrator | Tuesday 03 June 2025 15:29:34 +0000 (0:00:01.354) 0:00:21.797 ********** 2025-06-03 15:40:20.505982 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.505991 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.506000 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.506010 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.506064 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.506075 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.506084 | orchestrator | 2025-06-03 15:40:20.506094 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-03 15:40:20.506104 | orchestrator | Tuesday 03 June 2025 15:29:36 +0000 (0:00:01.795) 0:00:23.593 ********** 2025-06-03 15:40:20.506114 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.506123 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.506133 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.506143 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.506152 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.506162 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.506171 | orchestrator | 2025-06-03 15:40:20.506181 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-03 15:40:20.506191 | orchestrator | Tuesday 03 June 2025 15:29:37 +0000 (0:00:00.951) 0:00:24.544 ********** 2025-06-03 15:40:20.506207 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.506217 | orchestrator | 2025-06-03 15:40:20.506226 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-03 15:40:20.506236 | orchestrator | Tuesday 03 June 2025 15:29:37 +0000 (0:00:00.155) 0:00:24.700 ********** 2025-06-03 15:40:20.506246 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.506256 | orchestrator | 2025-06-03 15:40:20.506265 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-03 15:40:20.506275 | orchestrator | Tuesday 03 June 2025 15:29:37 +0000 (0:00:00.207) 0:00:24.907 ********** 2025-06-03 15:40:20.506285 | orchestrator | 
skipping: [testbed-node-0] 2025-06-03 15:40:20.506295 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.506304 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.506314 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.506324 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.506333 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.506343 | orchestrator | 2025-06-03 15:40:20.506353 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-03 15:40:20.506373 | orchestrator | Tuesday 03 June 2025 15:29:38 +0000 (0:00:00.797) 0:00:25.704 ********** 2025-06-03 15:40:20.506383 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.506393 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.506402 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.506412 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.506421 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.506431 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.506440 | orchestrator | 2025-06-03 15:40:20.506450 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-03 15:40:20.506460 | orchestrator | Tuesday 03 June 2025 15:29:39 +0000 (0:00:01.446) 0:00:27.151 ********** 2025-06-03 15:40:20.506470 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.506479 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.506489 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.506498 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.506508 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.506517 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.506533 | orchestrator | 2025-06-03 15:40:20.506543 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-03 15:40:20.506570 | orchestrator | Tuesday 03 June 2025 15:29:40 +0000 (0:00:00.907) 0:00:28.059 ********** 2025-06-03 15:40:20.506582 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.506591 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.506601 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.506610 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.506619 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.506629 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.506638 | orchestrator | 2025-06-03 15:40:20.506648 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-03 15:40:20.506658 | orchestrator | Tuesday 03 June 2025 15:29:41 +0000 (0:00:00.794) 0:00:28.853 ********** 2025-06-03 15:40:20.506667 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.506677 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.506686 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.506696 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.506705 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.506715 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.506724 | orchestrator | 2025-06-03 15:40:20.506734 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-03 15:40:20.506744 | orchestrator | Tuesday 03 June 2025 15:29:42 +0000 (0:00:00.646) 0:00:29.499 ********** 2025-06-03 15:40:20.506753 | 
orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.506763 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.506772 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.506782 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.506792 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.506803 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.506819 | orchestrator | 2025-06-03 15:40:20.506837 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-03 15:40:20.506853 | orchestrator | Tuesday 03 June 2025 15:29:42 +0000 (0:00:00.678) 0:00:30.178 ********** 2025-06-03 15:40:20.506871 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.506888 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.506905 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.506921 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.506931 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.506941 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.506950 | orchestrator | 2025-06-03 15:40:20.506960 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-03 15:40:20.506969 | orchestrator | Tuesday 03 June 2025 15:29:43 +0000 (0:00:00.774) 0:00:30.953 ********** 2025-06-03 15:40:20.506980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.506990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a', 'scsi-SQEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part1', 'scsi-SQEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part14', 'scsi-SQEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part15', 'scsi-SQEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part16', 'scsi-SQEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': 
'512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.507114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.507125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747', 'scsi-SQEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part1', 'scsi-SQEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part14', 'scsi-SQEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part15', 'scsi-SQEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part16', 'scsi-SQEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.507237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.507248 
| orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.507258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.507364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a', 'scsi-SQEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part1', 'scsi-SQEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part14', 'scsi-SQEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part15', 'scsi-SQEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part16', 'scsi-SQEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.508580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.508665 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.508690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a5276575--f764--5428--894d--d125091c496f-osd--block--a5276575--f764--5428--894d--d125091c496f', 'dm-uuid-LVM-nRGGPaStpf29XH9PEFiJRgvLNzQzUF0gerYnP8cTcH9vwrCe8WxdOsBU1eSIbIrQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': 
'4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a443cc3--e60d--5588--869b--39e93dfe07d6-osd--block--6a443cc3--e60d--5588--869b--39e93dfe07d6', 'dm-uuid-LVM-IJupnY7jw4zZIhHRi8XfW4ylftnbUxEodz46P8IGX2f1J5WOOoqYFRFb6vaoDnJW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508714 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.508725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8e839e97--cc3d--5431--ae91--f94b997cade9-osd--block--8e839e97--cc3d--5431--ae91--f94b997cade9', 'dm-uuid-LVM-gYcuttOc0Nsrc1gF55i0dQSUdy23zEIrf1Rj8ySrnDtugXtGF8mEf160mWrRLyjO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1191cd60--4b8c--5454--8e42--9818af3c2595-osd--block--1191cd60--4b8c--5454--8e42--9818af3c2595', 'dm-uuid-LVM-NucV1Eabq1nHybqCjjD5eQKyszZctw33gCYxE9GWcC0Qbc0ALYU7xKpyegXBvmIQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': 
'', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508979 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.508992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part1', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part14', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part15', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part16', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509045 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'sdb', 'value': {'holders': ['ceph--a5276575--f764--5428--894d--d125091c496f-osd--block--a5276575--f764--5428--894d--d125091c496f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8xz9pM-8Jia-cKtn-lqgw-8Ibt-cWui-cV2SXp', 'scsi-0QEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e', 'scsi-SQEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6a443cc3--e60d--5588--869b--39e93dfe07d6-osd--block--6a443cc3--e60d--5588--869b--39e93dfe07d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Hwp9sC-fQdV-TeI0-ezcd-CfFv-VmPG-5DRkFi', 'scsi-0QEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2', 'scsi-SQEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5', 'scsi-SQEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509132 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.509154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part1', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part14', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part15', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part16', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8e839e97--cc3d--5431--ae91--f94b997cade9-osd--block--8e839e97--cc3d--5431--ae91--f94b997cade9'], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z5SOHc-LMIV-Hnzh-9Kru-F05l-1qWm-9j1z7i', 'scsi-0QEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81', 'scsi-SQEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1191cd60--4b8c--5454--8e42--9818af3c2595-osd--block--1191cd60--4b8c--5454--8e42--9818af3c2595'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6GtL0i-ZCvM-RFD1-a3yQ-P1i5-4296-CIdOY0', 'scsi-0QEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35', 'scsi-SQEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144', 'scsi-SQEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53b632c4--9781--517b--ad8e--3b37c9789a01-osd--block--53b632c4--9781--517b--ad8e--3b37c9789a01', 'dm-uuid-LVM-UtCBhN7ekwDglfkwPU5DbbuGlpfvVLSwBka3LgpTl8Lccw3S0l125OrhR4Kqu1yj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509240 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.509251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5-osd--block--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5', 'dm-uuid-LVM-p6jyVjBaN36kCqNbczwHStJEw3wpSqPf2EJHcEOZJK3L7OfNvBvO6tOL8SFtY98W'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:20.509387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part1', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part14', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part15', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part16', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--53b632c4--9781--517b--ad8e--3b37c9789a01-osd--block--53b632c4--9781--517b--ad8e--3b37c9789a01'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RcY2qy-IZnS-duNN-Nt07-lHNx-kgon-LcaO84', 'scsi-0QEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9', 'scsi-SQEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5-osd--block--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QKTKdO-gmhA-eDdm-Bbme-bRgB-KIEK-fW57I9', 'scsi-0QEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057', 'scsi-SQEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447', 'scsi-SQEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:20.509464 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.509475 | orchestrator | 2025-06-03 15:40:20.509487 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-03 15:40:20.509498 | orchestrator | Tuesday 03 June 2025 15:29:46 +0000 (0:00:02.395) 0:00:33.349 ********** 2025-06-03 15:40:20.509510 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509521 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509537 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509548 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509588 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509612 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509630 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509641 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509658 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a', 'scsi-SQEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part1', 'scsi-SQEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part14', 'scsi-SQEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part15', 'scsi-SQEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part16', 'scsi-SQEMU_QEMU_HARDDISK_daa37257-efba-4fc6-9313-1e4cfc74b56a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509679 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509690 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.509701 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509711 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509728 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509739 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509750 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509764 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509781 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509793 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509809 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509820 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509831 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509841 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509861 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509900 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509930 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747', 'scsi-SQEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part1', 'scsi-SQEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part14', 'scsi-SQEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part15', 'scsi-SQEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part16', 'scsi-SQEMU_QEMU_HARDDISK_e15e68c1-db55-4e4e-993f-f3c7420d4747-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509948 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509972 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.509991 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510108 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a', 'scsi-SQEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part1', 'scsi-SQEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part14', 'scsi-SQEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part15', 'scsi-SQEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part16', 'scsi-SQEMU_QEMU_HARDDISK_55c4c1ce-4a0d-4db2-bd1e-96ac1249648a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510134 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': 
'2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510146 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.510157 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.510178 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a5276575--f764--5428--894d--d125091c496f-osd--block--a5276575--f764--5428--894d--d125091c496f', 'dm-uuid-LVM-nRGGPaStpf29XH9PEFiJRgvLNzQzUF0gerYnP8cTcH9vwrCe8WxdOsBU1eSIbIrQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510212 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a443cc3--e60d--5588--869b--39e93dfe07d6-osd--block--6a443cc3--e60d--5588--869b--39e93dfe07d6', 'dm-uuid-LVM-IJupnY7jw4zZIhHRi8XfW4ylftnbUxEodz46P8IGX2f1J5WOOoqYFRFb6vaoDnJW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510224 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8e839e97--cc3d--5431--ae91--f94b997cade9-osd--block--8e839e97--cc3d--5431--ae91--f94b997cade9', 'dm-uuid-LVM-gYcuttOc0Nsrc1gF55i0dQSUdy23zEIrf1Rj8ySrnDtugXtGF8mEf160mWrRLyjO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510234 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1191cd60--4b8c--5454--8e42--9818af3c2595-osd--block--1191cd60--4b8c--5454--8e42--9818af3c2595', 'dm-uuid-LVM-NucV1Eabq1nHybqCjjD5eQKyszZctw33gCYxE9GWcC0Qbc0ALYU7xKpyegXBvmIQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510275 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510293 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510303 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510314 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510325 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510335 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510349 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510366 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510382 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510393 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
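Note: every per-item "skipping" entry in this stretch is produced by the same two guards that the playbook evaluates for each discovered block device: the host must be in the OSD group (inventory_hostname in groups.get(osd_group_name, []), the false_condition reported for testbed-node-1/2) and automatic device discovery must be enabled (osd_auto_discovery | default(False) | bool, the false_condition reported for testbed-node-3/4/5). The following is a minimal illustrative sketch of such a guarded device loop, not the actual ceph-ansible task; the task name, the set_fact target, and the extra removable/partition/holder checks are assumptions added only to show the pattern.

# Illustrative sketch (assumed names, not the ceph-ansible source): build a
# candidate device list from gathered facts, gated by the two conditions that
# appear as "false_condition" in the skipped items logged here.
- name: Collect candidate OSD devices from discovered block devices (sketch)
  ansible.builtin.set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  loop: "{{ ansible_facts.devices | dict2items }}"
  loop_control:
    label: "{{ item.key }}"
  when:
    - inventory_hostname in groups.get(osd_group_name, [])  # only OSD hosts
    - osd_auto_discovery | default(False) | bool             # discovery enabled
    - item.value.removable == '0'                            # assumption: ignore removable media
    - item.value.partitions | length == 0                    # assumption: ignore partitioned disks
    - item.value.holders | length == 0                       # assumption: ignore devices already held (e.g. ceph LVs)

Because osd_auto_discovery is left at its default of false in this run and the control-plane nodes are not in the OSD group, every item of ansible_facts.devices evaluates both guards to false, which is exactly what the per-item entries above and below record.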
2025-06-03 15:40:20.510403 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510439 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53b632c4--9781--517b--ad8e--3b37c9789a01-osd--block--53b632c4--9781--517b--ad8e--3b37c9789a01', 'dm-uuid-LVM-UtCBhN7ekwDglfkwPU5DbbuGlpfvVLSwBka3LgpTl8Lccw3S0l125OrhR4Kqu1yj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510643 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510680 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part1', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part14', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part15', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part16', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5-osd--block--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5', 'dm-uuid-LVM-p6jyVjBaN36kCqNbczwHStJEw3wpSqPf2EJHcEOZJK3L7OfNvBvO6tOL8SFtY98W'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510750 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a5276575--f764--5428--894d--d125091c496f-osd--block--a5276575--f764--5428--894d--d125091c496f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8xz9pM-8Jia-cKtn-lqgw-8Ibt-cWui-cV2SXp', 'scsi-0QEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e', 'scsi-SQEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510770 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510788 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6a443cc3--e60d--5588--869b--39e93dfe07d6-osd--block--6a443cc3--e60d--5588--869b--39e93dfe07d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Hwp9sC-fQdV-TeI0-ezcd-CfFv-VmPG-5DRkFi', 'scsi-0QEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2', 'scsi-SQEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5', 'scsi-SQEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510815 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510848 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part1', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part14', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part15', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part16', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510859 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.510869 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510884 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8e839e97--cc3d--5431--ae91--f94b997cade9-osd--block--8e839e97--cc3d--5431--ae91--f94b997cade9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z5SOHc-LMIV-Hnzh-9Kru-F05l-1qWm-9j1z7i', 'scsi-0QEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81', 'scsi-SQEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1191cd60--4b8c--5454--8e42--9818af3c2595-osd--block--1191cd60--4b8c--5454--8e42--9818af3c2595'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6GtL0i-ZCvM-RFD1-a3yQ-P1i5-4296-CIdOY0', 'scsi-0QEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35', 'scsi-SQEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510920 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510931 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144', 'scsi-SQEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510941 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.510972 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.510988 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.511000 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.511010 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.511021 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.511047 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part1', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part14', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part15', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part16', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.511066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--53b632c4--9781--517b--ad8e--3b37c9789a01-osd--block--53b632c4--9781--517b--ad8e--3b37c9789a01'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RcY2qy-IZnS-duNN-Nt07-lHNx-kgon-LcaO84', 'scsi-0QEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9', 'scsi-SQEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.511077 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5-osd--block--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QKTKdO-gmhA-eDdm-Bbme-bRgB-KIEK-fW57I9', 'scsi-0QEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057', 'scsi-SQEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.511088 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447', 'scsi-SQEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.511107 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:20.511117 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.511128 | orchestrator | 2025-06-03 15:40:20.511138 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-03 15:40:20.511149 | orchestrator | Tuesday 03 June 2025 15:29:47 +0000 (0:00:01.609) 0:00:34.959 ********** 2025-06-03 15:40:20.511159 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.511169 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.511179 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.511194 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.511206 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.511216 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.511227 | orchestrator | 2025-06-03 15:40:20.511238 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-03 15:40:20.511249 | orchestrator | Tuesday 03 June 2025 15:29:49 +0000 (0:00:01.543) 0:00:36.502 ********** 2025-06-03 15:40:20.511261 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.511271 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.511282 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.511293 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.511304 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.511315 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.511326 | orchestrator | 2025-06-03 15:40:20.511337 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-03 15:40:20.511349 | orchestrator | Tuesday 03 June 2025 15:29:49 +0000 (0:00:00.794) 0:00:37.297 ********** 2025-06-03 15:40:20.511360 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.511374 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.511391 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.511408 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.511425 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.511441 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.511458 | orchestrator | 2025-06-03 15:40:20.511476 | orchestrator | TASK [ceph-facts : Set 
osd_pool_default_crush_rule fact] *********************** 2025-06-03 15:40:20.511494 | orchestrator | Tuesday 03 June 2025 15:29:50 +0000 (0:00:00.952) 0:00:38.249 ********** 2025-06-03 15:40:20.511513 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.511530 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.511544 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.511591 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.511619 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.511635 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.511650 | orchestrator | 2025-06-03 15:40:20.511666 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-03 15:40:20.511680 | orchestrator | Tuesday 03 June 2025 15:29:51 +0000 (0:00:00.760) 0:00:39.010 ********** 2025-06-03 15:40:20.511695 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.511711 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.511726 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.511742 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.511758 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.511774 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.511805 | orchestrator | 2025-06-03 15:40:20.511823 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-03 15:40:20.511839 | orchestrator | Tuesday 03 June 2025 15:29:52 +0000 (0:00:01.134) 0:00:40.144 ********** 2025-06-03 15:40:20.511855 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.511872 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.511887 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.511905 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.511922 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.511939 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.511952 | orchestrator | 2025-06-03 15:40:20.511962 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-03 15:40:20.511972 | orchestrator | Tuesday 03 June 2025 15:29:53 +0000 (0:00:00.828) 0:00:40.973 ********** 2025-06-03 15:40:20.511982 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:40:20.511991 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-06-03 15:40:20.512001 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-03 15:40:20.512010 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-06-03 15:40:20.512020 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-03 15:40:20.512029 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-06-03 15:40:20.512039 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-06-03 15:40:20.512049 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-03 15:40:20.512058 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-03 15:40:20.512068 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-03 15:40:20.512078 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-03 15:40:20.512087 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-06-03 15:40:20.512097 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-06-03 15:40:20.512106 | orchestrator | ok: [testbed-node-3] => 
(item=testbed-node-2) 2025-06-03 15:40:20.512115 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-03 15:40:20.512125 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-03 15:40:20.512134 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-03 15:40:20.512144 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-03 15:40:20.512153 | orchestrator | 2025-06-03 15:40:20.512163 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-03 15:40:20.512172 | orchestrator | Tuesday 03 June 2025 15:29:57 +0000 (0:00:03.805) 0:00:44.779 ********** 2025-06-03 15:40:20.512189 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-03 15:40:20.512199 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-03 15:40:20.512209 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-03 15:40:20.512219 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.512228 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-03 15:40:20.512238 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-03 15:40:20.512247 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-03 15:40:20.512257 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-03 15:40:20.512266 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-03 15:40:20.512275 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-03 15:40:20.512285 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.512295 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-03 15:40:20.512315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-03 15:40:20.512325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-03 15:40:20.512334 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.512344 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-03 15:40:20.512361 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-03 15:40:20.512371 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-03 15:40:20.512381 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.512390 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.512400 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-03 15:40:20.512409 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-03 15:40:20.512418 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-03 15:40:20.512428 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.512437 | orchestrator | 2025-06-03 15:40:20.512447 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-03 15:40:20.512457 | orchestrator | Tuesday 03 June 2025 15:29:58 +0000 (0:00:00.942) 0:00:45.721 ********** 2025-06-03 15:40:20.512467 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.512476 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.512486 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.512496 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.512505 | orchestrator | 2025-06-03 
15:40:20.512515 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-03 15:40:20.512526 | orchestrator | Tuesday 03 June 2025 15:30:00 +0000 (0:00:01.609) 0:00:47.331 ********** 2025-06-03 15:40:20.512535 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.512545 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.512696 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.512727 | orchestrator | 2025-06-03 15:40:20.512737 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-03 15:40:20.512748 | orchestrator | Tuesday 03 June 2025 15:30:00 +0000 (0:00:00.427) 0:00:47.758 ********** 2025-06-03 15:40:20.512757 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.512767 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.512777 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.512787 | orchestrator | 2025-06-03 15:40:20.512796 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-03 15:40:20.512805 | orchestrator | Tuesday 03 June 2025 15:30:00 +0000 (0:00:00.518) 0:00:48.277 ********** 2025-06-03 15:40:20.512814 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.512822 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.512831 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.512839 | orchestrator | 2025-06-03 15:40:20.512848 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-03 15:40:20.512856 | orchestrator | Tuesday 03 June 2025 15:30:01 +0000 (0:00:00.434) 0:00:48.711 ********** 2025-06-03 15:40:20.512865 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.512874 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.512883 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.512891 | orchestrator | 2025-06-03 15:40:20.512900 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-03 15:40:20.512908 | orchestrator | Tuesday 03 June 2025 15:30:02 +0000 (0:00:00.699) 0:00:49.410 ********** 2025-06-03 15:40:20.512917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:20.512926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:20.512934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:20.512943 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.512951 | orchestrator | 2025-06-03 15:40:20.512960 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-03 15:40:20.512968 | orchestrator | Tuesday 03 June 2025 15:30:02 +0000 (0:00:00.737) 0:00:50.148 ********** 2025-06-03 15:40:20.512977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:20.512996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:20.513005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:20.513013 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.513022 | orchestrator | 2025-06-03 15:40:20.513030 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-03 15:40:20.513039 | orchestrator | Tuesday 03 June 2025 15:30:03 +0000 (0:00:00.629) 0:00:50.778 
********** 2025-06-03 15:40:20.513048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:20.513056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:20.513065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:20.513074 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.513082 | orchestrator | 2025-06-03 15:40:20.513097 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-03 15:40:20.513106 | orchestrator | Tuesday 03 June 2025 15:30:04 +0000 (0:00:00.832) 0:00:51.610 ********** 2025-06-03 15:40:20.513115 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.513124 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.513132 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.513141 | orchestrator | 2025-06-03 15:40:20.513150 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-03 15:40:20.513159 | orchestrator | Tuesday 03 June 2025 15:30:04 +0000 (0:00:00.605) 0:00:52.216 ********** 2025-06-03 15:40:20.513168 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-03 15:40:20.513176 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-03 15:40:20.513185 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-03 15:40:20.513193 | orchestrator | 2025-06-03 15:40:20.513202 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-03 15:40:20.513211 | orchestrator | Tuesday 03 June 2025 15:30:06 +0000 (0:00:01.338) 0:00:53.555 ********** 2025-06-03 15:40:20.513231 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:40:20.513241 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:40:20.513250 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:40:20.513258 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-06-03 15:40:20.513267 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-03 15:40:20.513275 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-03 15:40:20.513284 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-03 15:40:20.513292 | orchestrator | 2025-06-03 15:40:20.513301 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-03 15:40:20.513310 | orchestrator | Tuesday 03 June 2025 15:30:07 +0000 (0:00:01.383) 0:00:54.939 ********** 2025-06-03 15:40:20.513318 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:40:20.513327 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:40:20.513335 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:40:20.513344 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-06-03 15:40:20.513353 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-03 15:40:20.513361 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-03 15:40:20.513370 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-03 15:40:20.513378 | orchestrator | 2025-06-03 15:40:20.513387 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-03 15:40:20.513396 | orchestrator | Tuesday 03 June 2025 15:30:09 +0000 (0:00:02.148) 0:00:57.088 ********** 2025-06-03 15:40:20.513411 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.513421 | orchestrator | 2025-06-03 15:40:20.513430 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-03 15:40:20.513439 | orchestrator | Tuesday 03 June 2025 15:30:11 +0000 (0:00:01.305) 0:00:58.393 ********** 2025-06-03 15:40:20.513447 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.513456 | orchestrator | 2025-06-03 15:40:20.513465 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-03 15:40:20.513473 | orchestrator | Tuesday 03 June 2025 15:30:12 +0000 (0:00:01.129) 0:00:59.523 ********** 2025-06-03 15:40:20.513482 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.513491 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.513499 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.513508 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.513517 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.513525 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.513534 | orchestrator | 2025-06-03 15:40:20.513543 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-03 15:40:20.513568 | orchestrator | Tuesday 03 June 2025 15:30:13 +0000 (0:00:00.985) 0:01:00.509 ********** 2025-06-03 15:40:20.513577 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.513586 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.513595 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.513603 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.513612 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.513620 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.513629 | orchestrator | 2025-06-03 15:40:20.513638 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-03 15:40:20.513646 | orchestrator | Tuesday 03 June 2025 15:30:15 +0000 (0:00:01.878) 0:01:02.387 ********** 2025-06-03 15:40:20.513655 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.513663 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.513672 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.513681 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.513689 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.513698 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.513706 | orchestrator | 2025-06-03 15:40:20.513715 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-03 15:40:20.513724 | orchestrator | Tuesday 03 June 2025 15:30:16 +0000 (0:00:01.405) 0:01:03.792 ********** 2025-06-03 15:40:20.513733 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.513745 | orchestrator | skipping: 
[testbed-node-1] 2025-06-03 15:40:20.513754 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.513763 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.513771 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.513780 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.513789 | orchestrator | 2025-06-03 15:40:20.513797 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-03 15:40:20.513806 | orchestrator | Tuesday 03 June 2025 15:30:17 +0000 (0:00:01.211) 0:01:05.003 ********** 2025-06-03 15:40:20.513815 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.513823 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.513832 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.513841 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.513849 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.513858 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.513866 | orchestrator | 2025-06-03 15:40:20.513875 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-03 15:40:20.513884 | orchestrator | Tuesday 03 June 2025 15:30:19 +0000 (0:00:01.473) 0:01:06.477 ********** 2025-06-03 15:40:20.513904 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.513914 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.513922 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.513931 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.513939 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.513948 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.513956 | orchestrator | 2025-06-03 15:40:20.513965 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-03 15:40:20.513974 | orchestrator | Tuesday 03 June 2025 15:30:19 +0000 (0:00:00.818) 0:01:07.296 ********** 2025-06-03 15:40:20.513983 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.513991 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.514000 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.514008 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.514086 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.514095 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.514104 | orchestrator | 2025-06-03 15:40:20.514113 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-03 15:40:20.514121 | orchestrator | Tuesday 03 June 2025 15:30:20 +0000 (0:00:00.908) 0:01:08.204 ********** 2025-06-03 15:40:20.514130 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.514139 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.514147 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.514156 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.514164 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.514173 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.514181 | orchestrator | 2025-06-03 15:40:20.514190 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-03 15:40:20.514199 | orchestrator | Tuesday 03 June 2025 15:30:22 +0000 (0:00:01.603) 0:01:09.808 ********** 2025-06-03 15:40:20.514208 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.514217 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.514225 | orchestrator | ok: 
[testbed-node-2] 2025-06-03 15:40:20.514234 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.514243 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.514251 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.514260 | orchestrator | 2025-06-03 15:40:20.514269 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-03 15:40:20.514277 | orchestrator | Tuesday 03 June 2025 15:30:24 +0000 (0:00:01.549) 0:01:11.357 ********** 2025-06-03 15:40:20.514286 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.514295 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.514303 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.514328 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.514337 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.514345 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.514354 | orchestrator | 2025-06-03 15:40:20.514362 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-03 15:40:20.514371 | orchestrator | Tuesday 03 June 2025 15:30:24 +0000 (0:00:00.464) 0:01:11.821 ********** 2025-06-03 15:40:20.514380 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.514388 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.514397 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.514406 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.514415 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.514423 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.514432 | orchestrator | 2025-06-03 15:40:20.514441 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:20.514449 | orchestrator | Tuesday 03 June 2025 15:30:25 +0000 (0:00:00.705) 0:01:12.527 ********** 2025-06-03 15:40:20.514458 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.514467 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.514475 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.514484 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.514493 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.514508 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.514517 | orchestrator | 2025-06-03 15:40:20.514526 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:20.514534 | orchestrator | Tuesday 03 June 2025 15:30:25 +0000 (0:00:00.541) 0:01:13.069 ********** 2025-06-03 15:40:20.514543 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.514564 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.514573 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.514582 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.514590 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.514599 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.514608 | orchestrator | 2025-06-03 15:40:20.514616 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:20.514625 | orchestrator | Tuesday 03 June 2025 15:30:26 +0000 (0:00:00.687) 0:01:13.756 ********** 2025-06-03 15:40:20.514634 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.514643 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.514651 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.514660 | orchestrator | ok: 
[testbed-node-3] 2025-06-03 15:40:20.514668 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.514677 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.514685 | orchestrator | 2025-06-03 15:40:20.514694 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:20.514703 | orchestrator | Tuesday 03 June 2025 15:30:26 +0000 (0:00:00.532) 0:01:14.289 ********** 2025-06-03 15:40:20.514716 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.514725 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.514734 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.514742 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.514751 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.514759 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.514768 | orchestrator | 2025-06-03 15:40:20.514776 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 15:40:20.514785 | orchestrator | Tuesday 03 June 2025 15:30:27 +0000 (0:00:00.635) 0:01:14.925 ********** 2025-06-03 15:40:20.514794 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.514803 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.514811 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.514820 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.514828 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.514837 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.514846 | orchestrator | 2025-06-03 15:40:20.514855 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:20.514877 | orchestrator | Tuesday 03 June 2025 15:30:28 +0000 (0:00:00.481) 0:01:15.406 ********** 2025-06-03 15:40:20.514886 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.514895 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.514904 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.514912 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.514950 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.514968 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.514988 | orchestrator | 2025-06-03 15:40:20.515003 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-03 15:40:20.515017 | orchestrator | Tuesday 03 June 2025 15:30:28 +0000 (0:00:00.675) 0:01:16.082 ********** 2025-06-03 15:40:20.515030 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.515045 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.515059 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.515073 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.515088 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.515102 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.515118 | orchestrator | 2025-06-03 15:40:20.515133 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-03 15:40:20.515148 | orchestrator | Tuesday 03 June 2025 15:30:29 +0000 (0:00:00.534) 0:01:16.617 ********** 2025-06-03 15:40:20.515166 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.515175 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.515184 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.515273 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.515293 | orchestrator | ok: [testbed-node-4] 
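Annotation: the ceph-handler block above probes each node for already-running Ceph containers (mon, osd, mds, rgw, mgr, crash, exporter) and converts the results into per-daemon handler_*_status facts that later restart handlers consult. A rough sketch of one check/fact pair, assuming the Docker CLI and the ceph-<daemon>-<hostname> container naming visible in this deployment; the role's real tasks differ in detail:

- name: Check for a mon container (sketch)
  ansible.builtin.command: docker ps -q --filter name=ceph-mon-{{ ansible_hostname }}
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false
  when: "'mons' in group_names"

- name: Set_fact handler_mon_status (sketch)
  ansible.builtin.set_fact:
    # True when the ps filter returned at least one container ID.
    handler_mon_status: "{{ (ceph_mon_container_stat.stdout | default('')) | length > 0 }}"
  when: "'mons' in group_names"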
2025-06-03 15:40:20.515309 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.515324 | orchestrator | 2025-06-03 15:40:20.515340 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-03 15:40:20.515358 | orchestrator | Tuesday 03 June 2025 15:30:30 +0000 (0:00:01.051) 0:01:17.668 ********** 2025-06-03 15:40:20.515374 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.515390 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.515400 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.515409 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.515419 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.515428 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.515438 | orchestrator | 2025-06-03 15:40:20.515447 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-03 15:40:20.515456 | orchestrator | Tuesday 03 June 2025 15:30:31 +0000 (0:00:01.455) 0:01:19.123 ********** 2025-06-03 15:40:20.515466 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.515476 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.515485 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.515498 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.515514 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.515530 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.515544 | orchestrator | 2025-06-03 15:40:20.515582 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-03 15:40:20.515597 | orchestrator | Tuesday 03 June 2025 15:30:34 +0000 (0:00:02.813) 0:01:21.937 ********** 2025-06-03 15:40:20.515612 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.515627 | orchestrator | 2025-06-03 15:40:20.515641 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-03 15:40:20.515657 | orchestrator | Tuesday 03 June 2025 15:30:35 +0000 (0:00:00.992) 0:01:22.930 ********** 2025-06-03 15:40:20.515671 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.515687 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.515703 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.515719 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.515735 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.515751 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.515768 | orchestrator | 2025-06-03 15:40:20.515785 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-03 15:40:20.515801 | orchestrator | Tuesday 03 June 2025 15:30:36 +0000 (0:00:00.656) 0:01:23.586 ********** 2025-06-03 15:40:20.515819 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.515835 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.515853 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.515869 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.515885 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.515901 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.515917 | orchestrator | 2025-06-03 15:40:20.515935 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] 
************************** 2025-06-03 15:40:20.515951 | orchestrator | Tuesday 03 June 2025 15:30:36 +0000 (0:00:00.514) 0:01:24.101 ********** 2025-06-03 15:40:20.515969 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-03 15:40:20.515986 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-03 15:40:20.516003 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-03 15:40:20.516043 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-03 15:40:20.516070 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-03 15:40:20.516088 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-03 15:40:20.516106 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-03 15:40:20.516123 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-03 15:40:20.516141 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-03 15:40:20.516158 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-03 15:40:20.516176 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-03 15:40:20.516193 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-03 15:40:20.516210 | orchestrator | 2025-06-03 15:40:20.516240 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-03 15:40:20.516258 | orchestrator | Tuesday 03 June 2025 15:30:38 +0000 (0:00:01.351) 0:01:25.452 ********** 2025-06-03 15:40:20.516275 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.516291 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.516309 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.516325 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.516342 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.516359 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.516376 | orchestrator | 2025-06-03 15:40:20.516393 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-03 15:40:20.516410 | orchestrator | Tuesday 03 June 2025 15:30:39 +0000 (0:00:00.908) 0:01:26.360 ********** 2025-06-03 15:40:20.516427 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.516445 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.516461 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.516477 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.516495 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.516511 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.516547 | orchestrator | 2025-06-03 15:40:20.516596 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-03 15:40:20.516613 | orchestrator | Tuesday 03 June 2025 15:30:39 +0000 (0:00:00.745) 0:01:27.105 ********** 2025-06-03 15:40:20.516631 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.516647 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.516663 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.516681 | orchestrator 
| skipping: [testbed-node-3] 2025-06-03 15:40:20.516698 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.516714 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.516730 | orchestrator | 2025-06-03 15:40:20.516746 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-03 15:40:20.516762 | orchestrator | Tuesday 03 June 2025 15:30:40 +0000 (0:00:00.513) 0:01:27.619 ********** 2025-06-03 15:40:20.516779 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.516796 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.516813 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.516829 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.516846 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.516863 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.516879 | orchestrator | 2025-06-03 15:40:20.516896 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-03 15:40:20.516914 | orchestrator | Tuesday 03 June 2025 15:30:41 +0000 (0:00:00.801) 0:01:28.420 ********** 2025-06-03 15:40:20.516930 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.516962 | orchestrator | 2025-06-03 15:40:20.516978 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-03 15:40:20.516993 | orchestrator | Tuesday 03 June 2025 15:30:42 +0000 (0:00:01.053) 0:01:29.474 ********** 2025-06-03 15:40:20.517008 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.517024 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.517039 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.517055 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.517070 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.517088 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.517103 | orchestrator | 2025-06-03 15:40:20.517119 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-03 15:40:20.517136 | orchestrator | Tuesday 03 June 2025 15:31:49 +0000 (0:01:07.056) 0:02:36.530 ********** 2025-06-03 15:40:20.517152 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-03 15:40:20.517170 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-03 15:40:20.517186 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-03 15:40:20.517204 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.517220 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-03 15:40:20.517236 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-03 15:40:20.517253 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-03 15:40:20.517269 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.517287 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-03 15:40:20.517303 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-03 15:40:20.517320 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-03 
15:40:20.517336 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.517352 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-03 15:40:20.517377 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-03 15:40:20.517395 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-03 15:40:20.517412 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.517428 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-03 15:40:20.517445 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-03 15:40:20.517461 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-03 15:40:20.517478 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.517495 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-03 15:40:20.517512 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-03 15:40:20.517530 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-03 15:40:20.517582 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.517602 | orchestrator | 2025-06-03 15:40:20.517619 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-03 15:40:20.517636 | orchestrator | Tuesday 03 June 2025 15:31:50 +0000 (0:00:00.932) 0:02:37.463 ********** 2025-06-03 15:40:20.517654 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.517671 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.517687 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.517705 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.517721 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.517738 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.517754 | orchestrator | 2025-06-03 15:40:20.517771 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-03 15:40:20.517799 | orchestrator | Tuesday 03 June 2025 15:31:50 +0000 (0:00:00.655) 0:02:38.118 ********** 2025-06-03 15:40:20.517816 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.517833 | orchestrator | 2025-06-03 15:40:20.517849 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-03 15:40:20.517866 | orchestrator | Tuesday 03 June 2025 15:31:50 +0000 (0:00:00.170) 0:02:38.289 ********** 2025-06-03 15:40:20.517882 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.517899 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.517915 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.517932 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.517949 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.517965 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.517981 | orchestrator | 2025-06-03 15:40:20.517999 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-03 15:40:20.518014 | orchestrator | Tuesday 03 June 2025 15:31:52 +0000 (0:00:01.042) 0:02:39.332 ********** 2025-06-03 15:40:20.518080 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.518099 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.518115 | 
orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.518133 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.518151 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.518168 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.518187 | orchestrator | 2025-06-03 15:40:20.518204 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-03 15:40:20.518221 | orchestrator | Tuesday 03 June 2025 15:31:52 +0000 (0:00:00.746) 0:02:40.078 ********** 2025-06-03 15:40:20.518239 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.518257 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.518273 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.518291 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.518309 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.518326 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.518344 | orchestrator | 2025-06-03 15:40:20.518362 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-03 15:40:20.518380 | orchestrator | Tuesday 03 June 2025 15:31:53 +0000 (0:00:01.002) 0:02:41.081 ********** 2025-06-03 15:40:20.518398 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.518498 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.518519 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.518536 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.518571 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.518588 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.518605 | orchestrator | 2025-06-03 15:40:20.518621 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-03 15:40:20.518638 | orchestrator | Tuesday 03 June 2025 15:31:56 +0000 (0:00:02.553) 0:02:43.634 ********** 2025-06-03 15:40:20.518654 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.518672 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.518689 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.518705 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.518721 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.518738 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.518756 | orchestrator | 2025-06-03 15:40:20.518773 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-03 15:40:20.518791 | orchestrator | Tuesday 03 June 2025 15:31:57 +0000 (0:00:01.026) 0:02:44.660 ********** 2025-06-03 15:40:20.518809 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.518827 | orchestrator | 2025-06-03 15:40:20.518845 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-03 15:40:20.518861 | orchestrator | Tuesday 03 June 2025 15:31:58 +0000 (0:00:01.422) 0:02:46.083 ********** 2025-06-03 15:40:20.518892 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.518909 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.518925 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.518943 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.518960 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.518975 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.518993 
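Annotation: "Get ceph version" runs the ceph binary from the freshly pulled image, "Set_fact ceph_version" keeps only the version number from its stdout, and release.yml then walks a series of guarded set_fact tasks (jewel, kraken, luminous, ...), of which only the one matching the major version runs; here that is reef (major version 18), as the later "Set_fact ceph_release reef" result shows. A condensed sketch of the idea, with an assumed docker invocation and assumed image variable names:

- name: Get ceph version (sketch)
  ansible.builtin.command: >
    docker run --rm --entrypoint /usr/bin/ceph
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }} --version
  register: ceph_version_out
  changed_when: false

- name: Set_fact ceph_version (sketch)
  ansible.builtin.set_fact:
    # "ceph version 18.2.x (<hash>) reef (stable)" -> "18.2.x"
    ceph_version: "{{ ceph_version_out.stdout.split(' ')[2] }}"

- name: Set_fact ceph_release reef (sketch)
  ansible.builtin.set_fact:
    ceph_release: reef
  when: ceph_version.split('.')[0] | int == 18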
| orchestrator | 2025-06-03 15:40:20.519008 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-03 15:40:20.519030 | orchestrator | Tuesday 03 June 2025 15:31:59 +0000 (0:00:00.765) 0:02:46.848 ********** 2025-06-03 15:40:20.519046 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.519060 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.519076 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.519091 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.519107 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.519122 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.519137 | orchestrator | 2025-06-03 15:40:20.519152 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-03 15:40:20.519169 | orchestrator | Tuesday 03 June 2025 15:32:00 +0000 (0:00:01.111) 0:02:47.960 ********** 2025-06-03 15:40:20.519183 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.519198 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.519213 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.519227 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.519241 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.519256 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.519270 | orchestrator | 2025-06-03 15:40:20.519285 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-03 15:40:20.519314 | orchestrator | Tuesday 03 June 2025 15:32:01 +0000 (0:00:00.972) 0:02:48.933 ********** 2025-06-03 15:40:20.519329 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.519345 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.519359 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.519374 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.519389 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.519404 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.519418 | orchestrator | 2025-06-03 15:40:20.519433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-03 15:40:20.519448 | orchestrator | Tuesday 03 June 2025 15:32:02 +0000 (0:00:01.230) 0:02:50.164 ********** 2025-06-03 15:40:20.519464 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.519478 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.519492 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.519507 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.519521 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.519537 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.519572 | orchestrator | 2025-06-03 15:40:20.519589 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-03 15:40:20.519604 | orchestrator | Tuesday 03 June 2025 15:32:03 +0000 (0:00:01.004) 0:02:51.168 ********** 2025-06-03 15:40:20.519620 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.519636 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.519651 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.519666 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.519682 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.519697 | orchestrator | skipping: [testbed-node-5] 2025-06-03 
15:40:20.519712 | orchestrator | 2025-06-03 15:40:20.519727 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-03 15:40:20.519742 | orchestrator | Tuesday 03 June 2025 15:32:05 +0000 (0:00:01.156) 0:02:52.324 ********** 2025-06-03 15:40:20.519757 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.519772 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.519786 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.519811 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.519827 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.519843 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.519857 | orchestrator | 2025-06-03 15:40:20.519872 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-03 15:40:20.519887 | orchestrator | Tuesday 03 June 2025 15:32:05 +0000 (0:00:00.954) 0:02:53.279 ********** 2025-06-03 15:40:20.519903 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.519916 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.519931 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.519946 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.519961 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.519976 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.519992 | orchestrator | 2025-06-03 15:40:20.520006 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-03 15:40:20.520021 | orchestrator | Tuesday 03 June 2025 15:32:07 +0000 (0:00:01.125) 0:02:54.404 ********** 2025-06-03 15:40:20.520035 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.520050 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.520066 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.520081 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.520096 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.520111 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.520126 | orchestrator | 2025-06-03 15:40:20.520142 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-03 15:40:20.520156 | orchestrator | Tuesday 03 June 2025 15:32:08 +0000 (0:00:01.486) 0:02:55.891 ********** 2025-06-03 15:40:20.520172 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.520189 | orchestrator | 2025-06-03 15:40:20.520204 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-03 15:40:20.520219 | orchestrator | Tuesday 03 June 2025 15:32:10 +0000 (0:00:01.495) 0:02:57.386 ********** 2025-06-03 15:40:20.520233 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-03 15:40:20.520247 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-03 15:40:20.520261 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-03 15:40:20.520276 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-03 15:40:20.520290 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-06-03 15:40:20.520305 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-03 15:40:20.520320 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-06-03 15:40:20.520334 | 
orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-03 15:40:20.520349 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-03 15:40:20.520365 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-03 15:40:20.520386 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-03 15:40:20.520402 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-06-03 15:40:20.520417 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-03 15:40:20.520432 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-03 15:40:20.520447 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-03 15:40:20.520461 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-03 15:40:20.520477 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-03 15:40:20.520491 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-03 15:40:20.520505 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-03 15:40:20.520520 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-03 15:40:20.520535 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-03 15:40:20.520625 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-03 15:40:20.520645 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-03 15:40:20.520659 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-06-03 15:40:20.520675 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-03 15:40:20.520689 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-03 15:40:20.520704 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-06-03 15:40:20.520719 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-06-03 15:40:20.520733 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-06-03 15:40:20.520749 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-03 15:40:20.520764 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-06-03 15:40:20.520778 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-06-03 15:40:20.520794 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-06-03 15:40:20.520809 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-06-03 15:40:20.520823 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-06-03 15:40:20.520839 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-06-03 15:40:20.520854 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-06-03 15:40:20.520869 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-06-03 15:40:20.520883 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-06-03 15:40:20.520898 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-06-03 15:40:20.520913 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-06-03 15:40:20.520926 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-06-03 15:40:20.520941 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-06-03 15:40:20.520956 | orchestrator | changed: 
[testbed-node-4] => (item=/var/lib/ceph/crash) 2025-06-03 15:40:20.520971 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-03 15:40:20.520987 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-03 15:40:20.521001 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-06-03 15:40:20.521016 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-06-03 15:40:20.521030 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-03 15:40:20.521043 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-06-03 15:40:20.521057 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-03 15:40:20.521071 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-03 15:40:20.521084 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-06-03 15:40:20.521097 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-03 15:40:20.521110 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-03 15:40:20.521124 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-03 15:40:20.521138 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-03 15:40:20.521151 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-03 15:40:20.521166 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-03 15:40:20.521259 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-03 15:40:20.521275 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-03 15:40:20.521289 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-03 15:40:20.521302 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-03 15:40:20.521326 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-03 15:40:20.521340 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-03 15:40:20.521354 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-03 15:40:20.521368 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-03 15:40:20.521382 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-03 15:40:20.521396 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-03 15:40:20.521416 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-03 15:40:20.521430 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-03 15:40:20.521444 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-03 15:40:20.521456 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-03 15:40:20.521470 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-03 15:40:20.521483 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-03 15:40:20.521496 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-03 15:40:20.521510 | orchestrator | changed: [testbed-node-4] 
=> (item=/var/lib/ceph/bootstrap-rbd) 2025-06-03 15:40:20.521524 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-03 15:40:20.521538 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-03 15:40:20.521578 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-03 15:40:20.521593 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-06-03 15:40:20.521607 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-03 15:40:20.521621 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-06-03 15:40:20.521635 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-03 15:40:20.521647 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-03 15:40:20.521660 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-06-03 15:40:20.521674 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-06-03 15:40:20.521688 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-03 15:40:20.521702 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-06-03 15:40:20.521716 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-06-03 15:40:20.521730 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-06-03 15:40:20.521744 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-06-03 15:40:20.521758 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-06-03 15:40:20.521771 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-06-03 15:40:20.521785 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-06-03 15:40:20.521798 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-06-03 15:40:20.521811 | orchestrator | 2025-06-03 15:40:20.521824 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-06-03 15:40:20.521837 | orchestrator | Tuesday 03 June 2025 15:32:16 +0000 (0:00:06.454) 0:03:03.841 ********** 2025-06-03 15:40:20.521851 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.521865 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.521877 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.521891 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.521906 | orchestrator | 2025-06-03 15:40:20.521920 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-06-03 15:40:20.521943 | orchestrator | Tuesday 03 June 2025 15:32:17 +0000 (0:00:00.995) 0:03:04.837 ********** 2025-06-03 15:40:20.521957 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.521971 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.521983 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.521996 | orchestrator | 2025-06-03 15:40:20.522010 | orchestrator | TASK [ceph-config : Generate 
environment file] ********************************* 2025-06-03 15:40:20.522169 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:00.733) 0:03:05.570 ********** 2025-06-03 15:40:20.522185 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.522199 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.522212 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.522226 | orchestrator | 2025-06-03 15:40:20.522239 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-06-03 15:40:20.522254 | orchestrator | Tuesday 03 June 2025 15:32:19 +0000 (0:00:01.402) 0:03:06.972 ********** 2025-06-03 15:40:20.522267 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.522282 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.522296 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.522309 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.522323 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.522337 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.522350 | orchestrator | 2025-06-03 15:40:20.522364 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-06-03 15:40:20.522378 | orchestrator | Tuesday 03 June 2025 15:32:20 +0000 (0:00:00.775) 0:03:07.748 ********** 2025-06-03 15:40:20.522391 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.522405 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.522419 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.522432 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.522454 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.522467 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.522481 | orchestrator | 2025-06-03 15:40:20.522545 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-06-03 15:40:20.522609 | orchestrator | Tuesday 03 June 2025 15:32:21 +0000 (0:00:01.182) 0:03:08.930 ********** 2025-06-03 15:40:20.522623 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.522636 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.522650 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.522663 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.522677 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.522692 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.522706 | orchestrator | 2025-06-03 15:40:20.522720 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-06-03 15:40:20.522734 | orchestrator | Tuesday 03 June 2025 15:32:22 +0000 (0:00:00.763) 0:03:09.694 ********** 2025-06-03 15:40:20.522747 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.522761 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.522827 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.522845 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.522858 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.522872 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.522886 | orchestrator | 2025-06-03 
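Annotation: each RGW node carries exactly one instance (rgw0) bound to its storage-network address on port 8081, as the loop items above show. The two rgw tasks create a per-instance data directory and an environment file that the containerized radosgw systemd unit later sources. A simplified sketch of what they do; the directory layout, file name, and file contents are assumptions, not copied from the role:

- name: Create rados gateway instance directories (sketch)
  ansible.builtin.file:
    path: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_hostname }}.{{ item.instance_name }}"  # assumed layout
    state: directory
    mode: "0755"
  loop: "{{ rgw_instances }}"

- name: Generate environment file (sketch)
  ansible.builtin.copy:
    dest: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_hostname }}.{{ item.instance_name }}/EnvironmentFile"
    content: "INST_NAME={{ item.instance_name }}\n"   # assumed contents consumed by the rgw unit file
    mode: "0644"
  loop: "{{ rgw_instances }}"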
15:40:20.522900 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-06-03 15:40:20.522924 | orchestrator | Tuesday 03 June 2025 15:32:23 +0000 (0:00:00.817) 0:03:10.512 ********** 2025-06-03 15:40:20.522938 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.522951 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.522965 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.522979 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.522993 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.523007 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.523020 | orchestrator | 2025-06-03 15:40:20.523034 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-06-03 15:40:20.523048 | orchestrator | Tuesday 03 June 2025 15:32:23 +0000 (0:00:00.665) 0:03:11.177 ********** 2025-06-03 15:40:20.523061 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.523074 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.523088 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.523102 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.523117 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.523131 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.523145 | orchestrator | 2025-06-03 15:40:20.523160 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-06-03 15:40:20.523174 | orchestrator | Tuesday 03 June 2025 15:32:24 +0000 (0:00:00.945) 0:03:12.123 ********** 2025-06-03 15:40:20.523187 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.523201 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.523214 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.523227 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.523240 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.523254 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.523269 | orchestrator | 2025-06-03 15:40:20.523283 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-06-03 15:40:20.523296 | orchestrator | Tuesday 03 June 2025 15:32:25 +0000 (0:00:00.820) 0:03:12.944 ********** 2025-06-03 15:40:20.523309 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.523323 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.523337 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.523351 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.523365 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.523379 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.523393 | orchestrator | 2025-06-03 15:40:20.523406 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-06-03 15:40:20.523420 | orchestrator | Tuesday 03 June 2025 15:32:26 +0000 (0:00:01.046) 0:03:13.990 ********** 2025-06-03 15:40:20.523434 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.523447 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.523459 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.523473 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.523487 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.523502 | 
orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.523516 | orchestrator | 2025-06-03 15:40:20.523530 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-06-03 15:40:20.523544 | orchestrator | Tuesday 03 June 2025 15:32:30 +0000 (0:00:03.467) 0:03:17.457 ********** 2025-06-03 15:40:20.523581 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.523595 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.523608 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.523621 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.523635 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.523649 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.523662 | orchestrator | 2025-06-03 15:40:20.523675 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-06-03 15:40:20.523688 | orchestrator | Tuesday 03 June 2025 15:32:30 +0000 (0:00:00.840) 0:03:18.297 ********** 2025-06-03 15:40:20.523701 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.523731 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.523745 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.523759 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.523772 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.523786 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.523799 | orchestrator | 2025-06-03 15:40:20.523814 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-06-03 15:40:20.523827 | orchestrator | Tuesday 03 June 2025 15:32:31 +0000 (0:00:00.721) 0:03:19.019 ********** 2025-06-03 15:40:20.523841 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.523855 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.523868 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.523882 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.523895 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.523908 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.523921 | orchestrator | 2025-06-03 15:40:20.523935 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-06-03 15:40:20.523957 | orchestrator | Tuesday 03 June 2025 15:32:32 +0000 (0:00:00.774) 0:03:19.793 ********** 2025-06-03 15:40:20.523970 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.523985 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.523998 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.524012 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.524027 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.524041 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.524055 | orchestrator | 2025-06-03 15:40:20.524070 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-06-03 15:40:20.524135 | orchestrator | Tuesday 03 June 2025 15:32:33 +0000 (0:00:00.630) 0:03:20.424 ********** 2025-06-03 15:40:20.524148 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.524160 | orchestrator | skipping: 
[testbed-node-1] 2025-06-03 15:40:20.524171 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.524184 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-06-03 15:40:20.524198 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-06-03 15:40:20.524211 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-06-03 15:40:20.524222 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-06-03 15:40:20.524234 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.524245 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.524257 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-06-03 15:40:20.524277 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-06-03 15:40:20.524289 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.524300 | orchestrator | 2025-06-03 15:40:20.524312 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-06-03 15:40:20.524324 | orchestrator | Tuesday 03 June 2025 15:32:33 +0000 (0:00:00.799) 0:03:21.223 ********** 2025-06-03 15:40:20.524336 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.524347 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.524358 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.524370 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.524380 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.524392 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.524403 | orchestrator | 2025-06-03 15:40:20.524414 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-06-03 15:40:20.524426 | orchestrator | Tuesday 03 June 2025 15:32:34 +0000 (0:00:00.712) 0:03:21.936 ********** 2025-06-03 15:40:20.524438 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.524450 | orchestrator | skipping: [testbed-node-1] 2025-06-03 
15:40:20.524461 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.524473 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.524484 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.524495 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.524507 | orchestrator | 2025-06-03 15:40:20.524519 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-03 15:40:20.524531 | orchestrator | Tuesday 03 June 2025 15:32:35 +0000 (0:00:00.787) 0:03:22.723 ********** 2025-06-03 15:40:20.524542 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.524573 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.524586 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.524597 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.524608 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.524626 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.524638 | orchestrator | 2025-06-03 15:40:20.524650 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-03 15:40:20.524662 | orchestrator | Tuesday 03 June 2025 15:32:35 +0000 (0:00:00.536) 0:03:23.259 ********** 2025-06-03 15:40:20.524674 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.524686 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.524698 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.524711 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.524722 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.524734 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.524746 | orchestrator | 2025-06-03 15:40:20.524757 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-03 15:40:20.524769 | orchestrator | Tuesday 03 June 2025 15:32:36 +0000 (0:00:00.684) 0:03:23.944 ********** 2025-06-03 15:40:20.524781 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.524793 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.524804 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.524856 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.524871 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.524883 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.524894 | orchestrator | 2025-06-03 15:40:20.524906 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-03 15:40:20.524917 | orchestrator | Tuesday 03 June 2025 15:32:37 +0000 (0:00:00.570) 0:03:24.515 ********** 2025-06-03 15:40:20.524938 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.524950 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.524962 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.524974 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.524986 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.524998 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.525009 | orchestrator | 2025-06-03 15:40:20.525020 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-03 15:40:20.525032 | orchestrator | Tuesday 03 June 2025 15:32:38 +0000 (0:00:00.910) 0:03:25.425 ********** 2025-06-03 15:40:20.525043 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-03 
15:40:20.525054 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-03 15:40:20.525065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-03 15:40:20.525077 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.525088 | orchestrator | 2025-06-03 15:40:20.525099 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-03 15:40:20.525110 | orchestrator | Tuesday 03 June 2025 15:32:38 +0000 (0:00:00.382) 0:03:25.807 ********** 2025-06-03 15:40:20.525122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-03 15:40:20.525133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-03 15:40:20.525145 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-03 15:40:20.525157 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.525169 | orchestrator | 2025-06-03 15:40:20.525181 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-03 15:40:20.525193 | orchestrator | Tuesday 03 June 2025 15:32:38 +0000 (0:00:00.423) 0:03:26.231 ********** 2025-06-03 15:40:20.525205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-03 15:40:20.525216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-03 15:40:20.525227 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-03 15:40:20.525238 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.525250 | orchestrator | 2025-06-03 15:40:20.525262 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-03 15:40:20.525273 | orchestrator | Tuesday 03 June 2025 15:32:39 +0000 (0:00:00.423) 0:03:26.655 ********** 2025-06-03 15:40:20.525284 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.525295 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.525307 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.525318 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.525330 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.525343 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.525353 | orchestrator | 2025-06-03 15:40:20.525364 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-03 15:40:20.525376 | orchestrator | Tuesday 03 June 2025 15:32:39 +0000 (0:00:00.643) 0:03:27.298 ********** 2025-06-03 15:40:20.525388 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-06-03 15:40:20.525399 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.525410 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-06-03 15:40:20.525422 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.525434 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-06-03 15:40:20.525445 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-03 15:40:20.525456 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.525468 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-03 15:40:20.525478 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-03 15:40:20.525489 | orchestrator | 2025-06-03 15:40:20.525501 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-06-03 15:40:20.525512 | orchestrator | Tuesday 03 June 2025 15:32:41 +0000 (0:00:01.881) 0:03:29.180 ********** 2025-06-03 15:40:20.525524 | orchestrator 
| changed: [testbed-node-0] 2025-06-03 15:40:20.525548 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.525609 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.525622 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.525634 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.525644 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.525656 | orchestrator | 2025-06-03 15:40:20.525668 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-03 15:40:20.525678 | orchestrator | Tuesday 03 June 2025 15:32:44 +0000 (0:00:02.427) 0:03:31.607 ********** 2025-06-03 15:40:20.525689 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.525700 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.525712 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.525723 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.525735 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.525747 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.525759 | orchestrator | 2025-06-03 15:40:20.525777 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-03 15:40:20.525789 | orchestrator | Tuesday 03 June 2025 15:32:45 +0000 (0:00:01.101) 0:03:32.709 ********** 2025-06-03 15:40:20.525801 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.525813 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.525825 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.525837 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.525849 | orchestrator | 2025-06-03 15:40:20.525861 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-03 15:40:20.525872 | orchestrator | Tuesday 03 June 2025 15:32:46 +0000 (0:00:00.882) 0:03:33.592 ********** 2025-06-03 15:40:20.525884 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.525894 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.525906 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.525917 | orchestrator | 2025-06-03 15:40:20.525929 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-03 15:40:20.525983 | orchestrator | Tuesday 03 June 2025 15:32:46 +0000 (0:00:00.278) 0:03:33.871 ********** 2025-06-03 15:40:20.525997 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.526009 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.526055 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.526069 | orchestrator | 2025-06-03 15:40:20.526080 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-03 15:40:20.526093 | orchestrator | Tuesday 03 June 2025 15:32:47 +0000 (0:00:01.326) 0:03:35.198 ********** 2025-06-03 15:40:20.526105 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-03 15:40:20.526118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-03 15:40:20.526131 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-03 15:40:20.526143 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.526155 | orchestrator | 2025-06-03 15:40:20.526168 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-03 
15:40:20.526180 | orchestrator | Tuesday 03 June 2025 15:32:48 +0000 (0:00:00.565) 0:03:35.763 ********** 2025-06-03 15:40:20.526193 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.526205 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.526217 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.526229 | orchestrator | 2025-06-03 15:40:20.526241 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-03 15:40:20.526253 | orchestrator | Tuesday 03 June 2025 15:32:48 +0000 (0:00:00.315) 0:03:36.079 ********** 2025-06-03 15:40:20.526265 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.526277 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.526289 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.526302 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.526315 | orchestrator | 2025-06-03 15:40:20.526336 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-03 15:40:20.526348 | orchestrator | Tuesday 03 June 2025 15:32:49 +0000 (0:00:00.938) 0:03:37.017 ********** 2025-06-03 15:40:20.526361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:20.526373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:20.526385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:20.526398 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.526410 | orchestrator | 2025-06-03 15:40:20.526423 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-03 15:40:20.526436 | orchestrator | Tuesday 03 June 2025 15:32:50 +0000 (0:00:00.374) 0:03:37.392 ********** 2025-06-03 15:40:20.526449 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.526460 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.526472 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.526484 | orchestrator | 2025-06-03 15:40:20.526497 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-03 15:40:20.526508 | orchestrator | Tuesday 03 June 2025 15:32:50 +0000 (0:00:00.318) 0:03:37.711 ********** 2025-06-03 15:40:20.526521 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.526533 | orchestrator | 2025-06-03 15:40:20.526545 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-03 15:40:20.526572 | orchestrator | Tuesday 03 June 2025 15:32:50 +0000 (0:00:00.217) 0:03:37.928 ********** 2025-06-03 15:40:20.526584 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.526596 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.526608 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.526619 | orchestrator | 2025-06-03 15:40:20.526631 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-03 15:40:20.526644 | orchestrator | Tuesday 03 June 2025 15:32:50 +0000 (0:00:00.312) 0:03:38.241 ********** 2025-06-03 15:40:20.526656 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.526667 | orchestrator | 2025-06-03 15:40:20.526679 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-03 15:40:20.526692 | orchestrator | Tuesday 03 June 2025 
15:32:51 +0000 (0:00:00.214) 0:03:38.456 ********** 2025-06-03 15:40:20.526704 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.526716 | orchestrator | 2025-06-03 15:40:20.526729 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-03 15:40:20.526742 | orchestrator | Tuesday 03 June 2025 15:32:51 +0000 (0:00:00.204) 0:03:38.660 ********** 2025-06-03 15:40:20.526754 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.526767 | orchestrator | 2025-06-03 15:40:20.526779 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-03 15:40:20.526791 | orchestrator | Tuesday 03 June 2025 15:32:51 +0000 (0:00:00.283) 0:03:38.944 ********** 2025-06-03 15:40:20.526803 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.526815 | orchestrator | 2025-06-03 15:40:20.526827 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-03 15:40:20.526839 | orchestrator | Tuesday 03 June 2025 15:32:51 +0000 (0:00:00.204) 0:03:39.149 ********** 2025-06-03 15:40:20.526851 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.526869 | orchestrator | 2025-06-03 15:40:20.526882 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-03 15:40:20.526894 | orchestrator | Tuesday 03 June 2025 15:32:52 +0000 (0:00:00.331) 0:03:39.480 ********** 2025-06-03 15:40:20.526906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:20.526919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:20.526931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:20.526944 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.526956 | orchestrator | 2025-06-03 15:40:20.526968 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-03 15:40:20.526988 | orchestrator | Tuesday 03 June 2025 15:32:52 +0000 (0:00:00.344) 0:03:39.825 ********** 2025-06-03 15:40:20.527000 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.527011 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.527024 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.527035 | orchestrator | 2025-06-03 15:40:20.527091 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-03 15:40:20.527105 | orchestrator | Tuesday 03 June 2025 15:32:52 +0000 (0:00:00.315) 0:03:40.140 ********** 2025-06-03 15:40:20.527117 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.527128 | orchestrator | 2025-06-03 15:40:20.527139 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-03 15:40:20.527149 | orchestrator | Tuesday 03 June 2025 15:32:53 +0000 (0:00:00.199) 0:03:40.339 ********** 2025-06-03 15:40:20.527159 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.527168 | orchestrator | 2025-06-03 15:40:20.527178 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-03 15:40:20.527188 | orchestrator | Tuesday 03 June 2025 15:32:53 +0000 (0:00:00.204) 0:03:40.544 ********** 2025-06-03 15:40:20.527197 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.527206 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.527216 | orchestrator | skipping: [testbed-node-2] 
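[editor's note] The OSD handler sequence above (get pool list, check the balancer module, disable pg autoscale and the balancer, restart the OSD daemons, then re-enable both) was skipped entirely in this run because no OSD restart was triggered. As a rough orientation only, the kind of Ceph CLI calls such a restart wrapper typically issues looks like the following sketch; the placeholders <pool> and <id> are illustrative and this is not the literal content of the ceph-handler restart script:

    ceph balancer status                              # check whether the balancer module is active
    ceph osd pool set <pool> pg_autoscale_mode off    # pause autoscaling per pool during restarts
    ceph balancer off                                 # pause rebalancing while OSDs bounce
    systemctl restart ceph-osd@<id>                   # restart one OSD daemon at a time
    ceph osd pool set <pool> pg_autoscale_mode on     # restore autoscaling afterwards
    ceph balancer on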
2025-06-03 15:40:20.527225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.527235 | orchestrator | 2025-06-03 15:40:20.527244 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-03 15:40:20.527254 | orchestrator | Tuesday 03 June 2025 15:32:54 +0000 (0:00:00.937) 0:03:41.481 ********** 2025-06-03 15:40:20.527264 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.527274 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.527284 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.527293 | orchestrator | 2025-06-03 15:40:20.527302 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-03 15:40:20.527311 | orchestrator | Tuesday 03 June 2025 15:32:54 +0000 (0:00:00.303) 0:03:41.784 ********** 2025-06-03 15:40:20.527322 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.527332 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.527342 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.527351 | orchestrator | 2025-06-03 15:40:20.527361 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-03 15:40:20.527371 | orchestrator | Tuesday 03 June 2025 15:32:55 +0000 (0:00:01.259) 0:03:43.043 ********** 2025-06-03 15:40:20.527380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:20.527391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:20.527400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:20.527410 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.527420 | orchestrator | 2025-06-03 15:40:20.527430 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-03 15:40:20.527440 | orchestrator | Tuesday 03 June 2025 15:32:56 +0000 (0:00:01.066) 0:03:44.110 ********** 2025-06-03 15:40:20.527450 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.527460 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.527471 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.527481 | orchestrator | 2025-06-03 15:40:20.527491 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-03 15:40:20.527501 | orchestrator | Tuesday 03 June 2025 15:32:57 +0000 (0:00:00.306) 0:03:44.417 ********** 2025-06-03 15:40:20.527511 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.527520 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.527531 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.527540 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.527571 | orchestrator | 2025-06-03 15:40:20.527582 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-03 15:40:20.527593 | orchestrator | Tuesday 03 June 2025 15:32:57 +0000 (0:00:00.866) 0:03:45.284 ********** 2025-06-03 15:40:20.527604 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.527614 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.527625 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.527635 | orchestrator | 2025-06-03 15:40:20.527645 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] 
*********************** 2025-06-03 15:40:20.527656 | orchestrator | Tuesday 03 June 2025 15:32:58 +0000 (0:00:00.292) 0:03:45.576 ********** 2025-06-03 15:40:20.527666 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.527675 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.527685 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.527695 | orchestrator | 2025-06-03 15:40:20.527705 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-03 15:40:20.527715 | orchestrator | Tuesday 03 June 2025 15:32:59 +0000 (0:00:01.240) 0:03:46.817 ********** 2025-06-03 15:40:20.527725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:20.527736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:20.527746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:20.527756 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.527766 | orchestrator | 2025-06-03 15:40:20.527776 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-03 15:40:20.527792 | orchestrator | Tuesday 03 June 2025 15:33:00 +0000 (0:00:00.695) 0:03:47.512 ********** 2025-06-03 15:40:20.527802 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.527813 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.527823 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.527832 | orchestrator | 2025-06-03 15:40:20.527843 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-03 15:40:20.527852 | orchestrator | Tuesday 03 June 2025 15:33:00 +0000 (0:00:00.357) 0:03:47.870 ********** 2025-06-03 15:40:20.527862 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.527872 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.527882 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.527892 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.527903 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.527912 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.527922 | orchestrator | 2025-06-03 15:40:20.527933 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-03 15:40:20.527943 | orchestrator | Tuesday 03 June 2025 15:33:01 +0000 (0:00:00.689) 0:03:48.559 ********** 2025-06-03 15:40:20.527988 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.528000 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.528010 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.528020 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.528030 | orchestrator | 2025-06-03 15:40:20.528040 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-03 15:40:20.528050 | orchestrator | Tuesday 03 June 2025 15:33:02 +0000 (0:00:00.976) 0:03:49.536 ********** 2025-06-03 15:40:20.528060 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.528071 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.528081 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.528091 | orchestrator | 2025-06-03 15:40:20.528101 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-03 15:40:20.528112 | orchestrator | Tuesday 03 
June 2025 15:33:02 +0000 (0:00:00.332) 0:03:49.869 ********** 2025-06-03 15:40:20.528122 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.528132 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.528142 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.528153 | orchestrator | 2025-06-03 15:40:20.528164 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-03 15:40:20.528181 | orchestrator | Tuesday 03 June 2025 15:33:03 +0000 (0:00:01.213) 0:03:51.082 ********** 2025-06-03 15:40:20.528191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-03 15:40:20.528201 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-03 15:40:20.528211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-03 15:40:20.528220 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.528230 | orchestrator | 2025-06-03 15:40:20.528240 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-03 15:40:20.528250 | orchestrator | Tuesday 03 June 2025 15:33:04 +0000 (0:00:00.811) 0:03:51.894 ********** 2025-06-03 15:40:20.528259 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.528269 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.528278 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.528288 | orchestrator | 2025-06-03 15:40:20.528297 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-06-03 15:40:20.528307 | orchestrator | 2025-06-03 15:40:20.528317 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-03 15:40:20.528326 | orchestrator | Tuesday 03 June 2025 15:33:05 +0000 (0:00:00.732) 0:03:52.626 ********** 2025-06-03 15:40:20.528335 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.528345 | orchestrator | 2025-06-03 15:40:20.528355 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-03 15:40:20.528366 | orchestrator | Tuesday 03 June 2025 15:33:05 +0000 (0:00:00.450) 0:03:53.077 ********** 2025-06-03 15:40:20.528376 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.528386 | orchestrator | 2025-06-03 15:40:20.528397 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-03 15:40:20.528407 | orchestrator | Tuesday 03 June 2025 15:33:06 +0000 (0:00:00.602) 0:03:53.679 ********** 2025-06-03 15:40:20.528417 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.528427 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.528437 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.528447 | orchestrator | 2025-06-03 15:40:20.528458 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-03 15:40:20.528468 | orchestrator | Tuesday 03 June 2025 15:33:07 +0000 (0:00:00.687) 0:03:54.367 ********** 2025-06-03 15:40:20.528479 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.528488 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.528498 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.528508 | orchestrator | 2025-06-03 15:40:20.528517 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2025-06-03 15:40:20.528528 | orchestrator | Tuesday 03 June 2025 15:33:07 +0000 (0:00:00.312) 0:03:54.679 ********** 2025-06-03 15:40:20.528537 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.528547 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.528571 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.528581 | orchestrator | 2025-06-03 15:40:20.528592 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-03 15:40:20.528602 | orchestrator | Tuesday 03 June 2025 15:33:07 +0000 (0:00:00.295) 0:03:54.974 ********** 2025-06-03 15:40:20.528612 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.528622 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.528632 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.528642 | orchestrator | 2025-06-03 15:40:20.528652 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-03 15:40:20.528663 | orchestrator | Tuesday 03 June 2025 15:33:08 +0000 (0:00:00.479) 0:03:55.454 ********** 2025-06-03 15:40:20.528673 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.528689 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.528706 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.528716 | orchestrator | 2025-06-03 15:40:20.528725 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-03 15:40:20.528734 | orchestrator | Tuesday 03 June 2025 15:33:08 +0000 (0:00:00.694) 0:03:56.148 ********** 2025-06-03 15:40:20.528744 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.528755 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.528765 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.528775 | orchestrator | 2025-06-03 15:40:20.528786 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-03 15:40:20.528796 | orchestrator | Tuesday 03 June 2025 15:33:09 +0000 (0:00:00.301) 0:03:56.449 ********** 2025-06-03 15:40:20.528806 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.528816 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.528825 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.528835 | orchestrator | 2025-06-03 15:40:20.528845 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-03 15:40:20.528888 | orchestrator | Tuesday 03 June 2025 15:33:09 +0000 (0:00:00.265) 0:03:56.714 ********** 2025-06-03 15:40:20.528900 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.528911 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.528921 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.528931 | orchestrator | 2025-06-03 15:40:20.528941 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-03 15:40:20.528951 | orchestrator | Tuesday 03 June 2025 15:33:10 +0000 (0:00:00.996) 0:03:57.710 ********** 2025-06-03 15:40:20.528960 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.528970 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.528981 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.528991 | orchestrator | 2025-06-03 15:40:20.529001 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-03 15:40:20.529011 | 
orchestrator | Tuesday 03 June 2025 15:33:11 +0000 (0:00:00.713) 0:03:58.424 ********** 2025-06-03 15:40:20.529022 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.529032 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.529043 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.529053 | orchestrator | 2025-06-03 15:40:20.529063 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-03 15:40:20.529073 | orchestrator | Tuesday 03 June 2025 15:33:11 +0000 (0:00:00.268) 0:03:58.692 ********** 2025-06-03 15:40:20.529083 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.529094 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.529104 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.529114 | orchestrator | 2025-06-03 15:40:20.529123 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:20.529133 | orchestrator | Tuesday 03 June 2025 15:33:11 +0000 (0:00:00.309) 0:03:59.001 ********** 2025-06-03 15:40:20.529143 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.529153 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.529164 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.529174 | orchestrator | 2025-06-03 15:40:20.529184 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:20.529194 | orchestrator | Tuesday 03 June 2025 15:33:12 +0000 (0:00:00.468) 0:03:59.470 ********** 2025-06-03 15:40:20.529204 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.529214 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.529223 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.529233 | orchestrator | 2025-06-03 15:40:20.529243 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:20.529254 | orchestrator | Tuesday 03 June 2025 15:33:12 +0000 (0:00:00.296) 0:03:59.766 ********** 2025-06-03 15:40:20.529264 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.529274 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.529284 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.529294 | orchestrator | 2025-06-03 15:40:20.529317 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:20.529327 | orchestrator | Tuesday 03 June 2025 15:33:12 +0000 (0:00:00.284) 0:04:00.051 ********** 2025-06-03 15:40:20.529337 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.529347 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.529357 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.529367 | orchestrator | 2025-06-03 15:40:20.529378 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 15:40:20.529389 | orchestrator | Tuesday 03 June 2025 15:33:13 +0000 (0:00:00.264) 0:04:00.315 ********** 2025-06-03 15:40:20.529399 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.529410 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.529421 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.529431 | orchestrator | 2025-06-03 15:40:20.529441 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:20.529452 | orchestrator | Tuesday 03 June 2025 15:33:13 +0000 (0:00:00.473) 0:04:00.789 
********** 2025-06-03 15:40:20.529462 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.529472 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.529482 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.529491 | orchestrator | 2025-06-03 15:40:20.529501 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-03 15:40:20.529511 | orchestrator | Tuesday 03 June 2025 15:33:13 +0000 (0:00:00.323) 0:04:01.113 ********** 2025-06-03 15:40:20.529520 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.529531 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.529541 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.529551 | orchestrator | 2025-06-03 15:40:20.529606 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-03 15:40:20.529616 | orchestrator | Tuesday 03 June 2025 15:33:14 +0000 (0:00:00.319) 0:04:01.432 ********** 2025-06-03 15:40:20.529626 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.529637 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.529646 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.529656 | orchestrator | 2025-06-03 15:40:20.529667 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-03 15:40:20.529677 | orchestrator | Tuesday 03 June 2025 15:33:14 +0000 (0:00:00.829) 0:04:02.262 ********** 2025-06-03 15:40:20.529687 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.529697 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.529707 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.529717 | orchestrator | 2025-06-03 15:40:20.529731 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-06-03 15:40:20.529742 | orchestrator | Tuesday 03 June 2025 15:33:15 +0000 (0:00:00.405) 0:04:02.667 ********** 2025-06-03 15:40:20.529752 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.529761 | orchestrator | 2025-06-03 15:40:20.529771 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-06-03 15:40:20.529781 | orchestrator | Tuesday 03 June 2025 15:33:15 +0000 (0:00:00.555) 0:04:03.222 ********** 2025-06-03 15:40:20.529791 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.529801 | orchestrator | 2025-06-03 15:40:20.529811 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-06-03 15:40:20.529821 | orchestrator | Tuesday 03 June 2025 15:33:16 +0000 (0:00:00.122) 0:04:03.345 ********** 2025-06-03 15:40:20.529831 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-03 15:40:20.529841 | orchestrator | 2025-06-03 15:40:20.529888 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-06-03 15:40:20.529900 | orchestrator | Tuesday 03 June 2025 15:33:17 +0000 (0:00:01.505) 0:04:04.850 ********** 2025-06-03 15:40:20.529910 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.529920 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.529930 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.529940 | orchestrator | 2025-06-03 15:40:20.529957 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-06-03 15:40:20.529968 | orchestrator | Tuesday 03 June 2025 15:33:18 +0000 
(0:00:00.469) 0:04:05.320 ********** 2025-06-03 15:40:20.529978 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.529988 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.529998 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.530007 | orchestrator | 2025-06-03 15:40:20.530067 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-03 15:40:20.530081 | orchestrator | Tuesday 03 June 2025 15:33:18 +0000 (0:00:00.471) 0:04:05.791 ********** 2025-06-03 15:40:20.530091 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.530101 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.530111 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.530121 | orchestrator | 2025-06-03 15:40:20.530130 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-03 15:40:20.530140 | orchestrator | Tuesday 03 June 2025 15:33:19 +0000 (0:00:01.323) 0:04:07.114 ********** 2025-06-03 15:40:20.530150 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.530160 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.530170 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.530179 | orchestrator | 2025-06-03 15:40:20.530188 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-03 15:40:20.530198 | orchestrator | Tuesday 03 June 2025 15:33:21 +0000 (0:00:01.419) 0:04:08.534 ********** 2025-06-03 15:40:20.530208 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.530217 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.530226 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.530236 | orchestrator | 2025-06-03 15:40:20.530246 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-03 15:40:20.530256 | orchestrator | Tuesday 03 June 2025 15:33:22 +0000 (0:00:00.861) 0:04:09.396 ********** 2025-06-03 15:40:20.530266 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.530276 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.530286 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.530297 | orchestrator | 2025-06-03 15:40:20.530306 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-03 15:40:20.530316 | orchestrator | Tuesday 03 June 2025 15:33:22 +0000 (0:00:00.900) 0:04:10.297 ********** 2025-06-03 15:40:20.530326 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.530335 | orchestrator | 2025-06-03 15:40:20.530345 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-03 15:40:20.530355 | orchestrator | Tuesday 03 June 2025 15:33:24 +0000 (0:00:01.324) 0:04:11.622 ********** 2025-06-03 15:40:20.530364 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.530374 | orchestrator | 2025-06-03 15:40:20.530384 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-03 15:40:20.530395 | orchestrator | Tuesday 03 June 2025 15:33:25 +0000 (0:00:00.814) 0:04:12.437 ********** 2025-06-03 15:40:20.530405 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-03 15:40:20.530415 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.530426 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.530437 | orchestrator | 
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:40:20.530447 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-03 15:40:20.530457 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:40:20.530468 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:40:20.530478 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-06-03 15:40:20.530488 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:40:20.530498 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-06-03 15:40:20.530508 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-03 15:40:20.530525 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-03 15:40:20.530535 | orchestrator | 2025-06-03 15:40:20.530546 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-03 15:40:20.530571 | orchestrator | Tuesday 03 June 2025 15:33:28 +0000 (0:00:03.332) 0:04:15.769 ********** 2025-06-03 15:40:20.530580 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.530590 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.530599 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.530609 | orchestrator | 2025-06-03 15:40:20.530619 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-03 15:40:20.530629 | orchestrator | Tuesday 03 June 2025 15:33:30 +0000 (0:00:01.594) 0:04:17.364 ********** 2025-06-03 15:40:20.530639 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.530662 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.530673 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.530682 | orchestrator | 2025-06-03 15:40:20.530691 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-03 15:40:20.530701 | orchestrator | Tuesday 03 June 2025 15:33:30 +0000 (0:00:00.305) 0:04:17.669 ********** 2025-06-03 15:40:20.530711 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.530720 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.530730 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.530739 | orchestrator | 2025-06-03 15:40:20.530748 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-06-03 15:40:20.530758 | orchestrator | Tuesday 03 June 2025 15:33:30 +0000 (0:00:00.283) 0:04:17.952 ********** 2025-06-03 15:40:20.530769 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.530779 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.530789 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.530799 | orchestrator | 2025-06-03 15:40:20.530809 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-03 15:40:20.530860 | orchestrator | Tuesday 03 June 2025 15:33:32 +0000 (0:00:01.997) 0:04:19.950 ********** 2025-06-03 15:40:20.530874 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.530884 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.530894 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.530905 | orchestrator | 2025-06-03 15:40:20.530915 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-03 15:40:20.530925 | orchestrator | Tuesday 03 June 2025 15:33:34 +0000 (0:00:01.618) 0:04:21.568 
********** 2025-06-03 15:40:20.530935 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.530945 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.530955 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.530965 | orchestrator | 2025-06-03 15:40:20.530974 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-03 15:40:20.530984 | orchestrator | Tuesday 03 June 2025 15:33:34 +0000 (0:00:00.278) 0:04:21.847 ********** 2025-06-03 15:40:20.530995 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.531004 | orchestrator | 2025-06-03 15:40:20.531014 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-03 15:40:20.531023 | orchestrator | Tuesday 03 June 2025 15:33:35 +0000 (0:00:00.521) 0:04:22.368 ********** 2025-06-03 15:40:20.531033 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.531042 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.531052 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.531062 | orchestrator | 2025-06-03 15:40:20.531072 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-03 15:40:20.531082 | orchestrator | Tuesday 03 June 2025 15:33:35 +0000 (0:00:00.513) 0:04:22.881 ********** 2025-06-03 15:40:20.531093 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.531103 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.531112 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.531123 | orchestrator | 2025-06-03 15:40:20.531141 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-03 15:40:20.531150 | orchestrator | Tuesday 03 June 2025 15:33:35 +0000 (0:00:00.320) 0:04:23.201 ********** 2025-06-03 15:40:20.531158 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.531167 | orchestrator | 2025-06-03 15:40:20.531176 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-03 15:40:20.531184 | orchestrator | Tuesday 03 June 2025 15:33:36 +0000 (0:00:00.553) 0:04:23.755 ********** 2025-06-03 15:40:20.531192 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.531201 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.531210 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.531218 | orchestrator | 2025-06-03 15:40:20.531226 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-03 15:40:20.531235 | orchestrator | Tuesday 03 June 2025 15:33:38 +0000 (0:00:01.978) 0:04:25.734 ********** 2025-06-03 15:40:20.531244 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.531253 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.531262 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.531270 | orchestrator | 2025-06-03 15:40:20.531279 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-03 15:40:20.531287 | orchestrator | Tuesday 03 June 2025 15:33:39 +0000 (0:00:01.259) 0:04:26.993 ********** 2025-06-03 15:40:20.531296 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.531305 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.531314 | orchestrator 
| changed: [testbed-node-2] 2025-06-03 15:40:20.531322 | orchestrator | 2025-06-03 15:40:20.531331 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-03 15:40:20.531339 | orchestrator | Tuesday 03 June 2025 15:33:41 +0000 (0:00:01.985) 0:04:28.978 ********** 2025-06-03 15:40:20.531348 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.531356 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.531365 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.531374 | orchestrator | 2025-06-03 15:40:20.531383 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-03 15:40:20.531392 | orchestrator | Tuesday 03 June 2025 15:33:43 +0000 (0:00:02.070) 0:04:31.049 ********** 2025-06-03 15:40:20.531402 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.531410 | orchestrator | 2025-06-03 15:40:20.531419 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-06-03 15:40:20.531428 | orchestrator | Tuesday 03 June 2025 15:33:44 +0000 (0:00:00.856) 0:04:31.906 ********** 2025-06-03 15:40:20.531437 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.531446 | orchestrator | 2025-06-03 15:40:20.531455 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-03 15:40:20.531464 | orchestrator | Tuesday 03 June 2025 15:33:46 +0000 (0:00:01.414) 0:04:33.321 ********** 2025-06-03 15:40:20.531473 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.531482 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.531491 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.531499 | orchestrator | 2025-06-03 15:40:20.531513 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-03 15:40:20.531523 | orchestrator | Tuesday 03 June 2025 15:33:56 +0000 (0:00:10.606) 0:04:43.928 ********** 2025-06-03 15:40:20.531531 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.531541 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.531550 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.531572 | orchestrator | 2025-06-03 15:40:20.531582 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-03 15:40:20.531591 | orchestrator | Tuesday 03 June 2025 15:33:57 +0000 (0:00:00.554) 0:04:44.482 ********** 2025-06-03 15:40:20.531632 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4f0adfd166d75e06af6f534cf990accfc73539e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-06-03 15:40:20.531652 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4f0adfd166d75e06af6f534cf990accfc73539e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-03 15:40:20.531662 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': 
'192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4f0adfd166d75e06af6f534cf990accfc73539e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-03 15:40:20.531672 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4f0adfd166d75e06af6f534cf990accfc73539e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-03 15:40:20.531681 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4f0adfd166d75e06af6f534cf990accfc73539e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-03 15:40:20.531691 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4f0adfd166d75e06af6f534cf990accfc73539e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__b4f0adfd166d75e06af6f534cf990accfc73539e'}])  2025-06-03 15:40:20.531701 | orchestrator | 2025-06-03 15:40:20.531710 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-03 15:40:20.531719 | orchestrator | Tuesday 03 June 2025 15:34:13 +0000 (0:00:16.295) 0:05:00.778 ********** 2025-06-03 15:40:20.531727 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.531737 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.531746 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.531754 | orchestrator | 2025-06-03 15:40:20.531763 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-03 15:40:20.531772 | orchestrator | Tuesday 03 June 2025 15:34:13 +0000 (0:00:00.353) 0:05:01.131 ********** 2025-06-03 15:40:20.531781 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.531790 | orchestrator | 2025-06-03 15:40:20.531799 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-03 15:40:20.531807 | orchestrator | Tuesday 03 June 2025 15:34:14 +0000 (0:00:00.774) 0:05:01.905 ********** 2025-06-03 15:40:20.531815 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.531824 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.531832 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.531841 | orchestrator | 2025-06-03 15:40:20.531849 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-03 15:40:20.531857 | orchestrator | Tuesday 03 June 2025 15:34:14 +0000 (0:00:00.363) 0:05:02.269 ********** 2025-06-03 15:40:20.531866 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.531875 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.531890 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.531900 | orchestrator | 
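The "Set cluster configs" task logged above pushes the role's generated options into the cluster's central configuration database from the first monitor, one option per loop item. A minimal, illustrative sketch of roughly equivalent manual CLI calls, built only from the key/value pairs visible in the logged items (the role's internal mechanism may differ; boolean values are lowercased here):

    # run on a monitor node; values are the ones shown in the log items above
    ceph config set global public_network 192.168.16.0/20
    ceph config set global cluster_network 192.168.16.0/20
    ceph config set global osd_pool_default_crush_rule -1
    ceph config set global ms_bind_ipv6 false
    ceph config set global ms_bind_ipv4 true
    # osd_crush_chooseleaf_type is skipped in the log because its value is an omit placeholder
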
2025-06-03 15:40:20.531909 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-03 15:40:20.531918 | orchestrator | Tuesday 03 June 2025 15:34:15 +0000 (0:00:00.350) 0:05:02.620 ********** 2025-06-03 15:40:20.531931 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-03 15:40:20.531941 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-03 15:40:20.531949 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-03 15:40:20.531958 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.531967 | orchestrator | 2025-06-03 15:40:20.531976 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-03 15:40:20.531984 | orchestrator | Tuesday 03 June 2025 15:34:16 +0000 (0:00:00.985) 0:05:03.605 ********** 2025-06-03 15:40:20.531993 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.532002 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.532010 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.532019 | orchestrator | 2025-06-03 15:40:20.532028 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-03 15:40:20.532036 | orchestrator | 2025-06-03 15:40:20.532045 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-03 15:40:20.532054 | orchestrator | Tuesday 03 June 2025 15:34:17 +0000 (0:00:00.788) 0:05:04.393 ********** 2025-06-03 15:40:20.532088 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.532099 | orchestrator | 2025-06-03 15:40:20.532108 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-03 15:40:20.532116 | orchestrator | Tuesday 03 June 2025 15:34:17 +0000 (0:00:00.524) 0:05:04.918 ********** 2025-06-03 15:40:20.532125 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.532135 | orchestrator | 2025-06-03 15:40:20.532143 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-03 15:40:20.532150 | orchestrator | Tuesday 03 June 2025 15:34:18 +0000 (0:00:00.650) 0:05:05.569 ********** 2025-06-03 15:40:20.532158 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.532165 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.532173 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.532182 | orchestrator | 2025-06-03 15:40:20.532191 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-03 15:40:20.532199 | orchestrator | Tuesday 03 June 2025 15:34:19 +0000 (0:00:00.808) 0:05:06.377 ********** 2025-06-03 15:40:20.532208 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.532216 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.532224 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.532232 | orchestrator | 2025-06-03 15:40:20.532240 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-03 15:40:20.532248 | orchestrator | Tuesday 03 June 2025 15:34:19 +0000 (0:00:00.437) 0:05:06.815 ********** 2025-06-03 15:40:20.532257 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.532266 | orchestrator | skipping: 
[testbed-node-1] 2025-06-03 15:40:20.532274 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.532282 | orchestrator | 2025-06-03 15:40:20.532290 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-03 15:40:20.532298 | orchestrator | Tuesday 03 June 2025 15:34:20 +0000 (0:00:00.686) 0:05:07.501 ********** 2025-06-03 15:40:20.532307 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.532315 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.532324 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.532332 | orchestrator | 2025-06-03 15:40:20.532341 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-03 15:40:20.532349 | orchestrator | Tuesday 03 June 2025 15:34:20 +0000 (0:00:00.314) 0:05:07.816 ********** 2025-06-03 15:40:20.532367 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.532375 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.532383 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.532392 | orchestrator | 2025-06-03 15:40:20.532400 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-03 15:40:20.532408 | orchestrator | Tuesday 03 June 2025 15:34:21 +0000 (0:00:00.810) 0:05:08.626 ********** 2025-06-03 15:40:20.532417 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.532426 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.532435 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.532443 | orchestrator | 2025-06-03 15:40:20.532451 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-03 15:40:20.532460 | orchestrator | Tuesday 03 June 2025 15:34:21 +0000 (0:00:00.430) 0:05:09.057 ********** 2025-06-03 15:40:20.532469 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.532477 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.532485 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.532494 | orchestrator | 2025-06-03 15:40:20.532503 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-03 15:40:20.532511 | orchestrator | Tuesday 03 June 2025 15:34:22 +0000 (0:00:00.617) 0:05:09.674 ********** 2025-06-03 15:40:20.532519 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.532528 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.532537 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.532546 | orchestrator | 2025-06-03 15:40:20.532568 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-03 15:40:20.532577 | orchestrator | Tuesday 03 June 2025 15:34:23 +0000 (0:00:00.979) 0:05:10.654 ********** 2025-06-03 15:40:20.532585 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.532594 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.532603 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.532611 | orchestrator | 2025-06-03 15:40:20.532620 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-03 15:40:20.532629 | orchestrator | Tuesday 03 June 2025 15:34:24 +0000 (0:00:00.885) 0:05:11.540 ********** 2025-06-03 15:40:20.532637 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.532646 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.532654 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.532663 
| orchestrator | 2025-06-03 15:40:20.532671 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-03 15:40:20.532680 | orchestrator | Tuesday 03 June 2025 15:34:24 +0000 (0:00:00.312) 0:05:11.853 ********** 2025-06-03 15:40:20.532689 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.532698 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.532706 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.532714 | orchestrator | 2025-06-03 15:40:20.532729 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:20.532738 | orchestrator | Tuesday 03 June 2025 15:34:25 +0000 (0:00:00.523) 0:05:12.376 ********** 2025-06-03 15:40:20.532747 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.532756 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.532765 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.532774 | orchestrator | 2025-06-03 15:40:20.532784 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:20.532793 | orchestrator | Tuesday 03 June 2025 15:34:25 +0000 (0:00:00.301) 0:05:12.677 ********** 2025-06-03 15:40:20.532802 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.532810 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.532818 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.532827 | orchestrator | 2025-06-03 15:40:20.532834 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:20.532843 | orchestrator | Tuesday 03 June 2025 15:34:25 +0000 (0:00:00.274) 0:05:12.952 ********** 2025-06-03 15:40:20.532887 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.532896 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.532911 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.532920 | orchestrator | 2025-06-03 15:40:20.532928 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:20.532936 | orchestrator | Tuesday 03 June 2025 15:34:25 +0000 (0:00:00.304) 0:05:13.257 ********** 2025-06-03 15:40:20.532944 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.532952 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.532960 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.532968 | orchestrator | 2025-06-03 15:40:20.532977 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 15:40:20.532985 | orchestrator | Tuesday 03 June 2025 15:34:26 +0000 (0:00:00.525) 0:05:13.782 ********** 2025-06-03 15:40:20.532994 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.533002 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.533011 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.533020 | orchestrator | 2025-06-03 15:40:20.533028 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:20.533036 | orchestrator | Tuesday 03 June 2025 15:34:26 +0000 (0:00:00.344) 0:05:14.126 ********** 2025-06-03 15:40:20.533044 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.533052 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.533060 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.533068 | orchestrator | 2025-06-03 15:40:20.533076 | orchestrator | TASK [ceph-handler : Set_fact 
handler_crash_status] **************************** 2025-06-03 15:40:20.533084 | orchestrator | Tuesday 03 June 2025 15:34:27 +0000 (0:00:00.412) 0:05:14.539 ********** 2025-06-03 15:40:20.533093 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.533101 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.533109 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.533117 | orchestrator | 2025-06-03 15:40:20.533126 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-03 15:40:20.533134 | orchestrator | Tuesday 03 June 2025 15:34:27 +0000 (0:00:00.351) 0:05:14.891 ********** 2025-06-03 15:40:20.533142 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.533150 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.533157 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.533165 | orchestrator | 2025-06-03 15:40:20.533174 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-06-03 15:40:20.533181 | orchestrator | Tuesday 03 June 2025 15:34:28 +0000 (0:00:00.770) 0:05:15.661 ********** 2025-06-03 15:40:20.533190 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:40:20.533198 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:40:20.533206 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:40:20.533213 | orchestrator | 2025-06-03 15:40:20.533221 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-06-03 15:40:20.533229 | orchestrator | Tuesday 03 June 2025 15:34:28 +0000 (0:00:00.636) 0:05:16.298 ********** 2025-06-03 15:40:20.533237 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.533245 | orchestrator | 2025-06-03 15:40:20.533254 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-06-03 15:40:20.533262 | orchestrator | Tuesday 03 June 2025 15:34:29 +0000 (0:00:00.557) 0:05:16.855 ********** 2025-06-03 15:40:20.533271 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.533279 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.533287 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.533295 | orchestrator | 2025-06-03 15:40:20.533303 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-06-03 15:40:20.533311 | orchestrator | Tuesday 03 June 2025 15:34:30 +0000 (0:00:00.886) 0:05:17.742 ********** 2025-06-03 15:40:20.533319 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.533327 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.533342 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.533350 | orchestrator | 2025-06-03 15:40:20.533358 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-06-03 15:40:20.533366 | orchestrator | Tuesday 03 June 2025 15:34:30 +0000 (0:00:00.328) 0:05:18.071 ********** 2025-06-03 15:40:20.533375 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-03 15:40:20.533383 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-03 15:40:20.533390 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-03 15:40:20.533397 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-06-03 
15:40:20.533405 | orchestrator | 2025-06-03 15:40:20.533412 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-06-03 15:40:20.533420 | orchestrator | Tuesday 03 June 2025 15:34:42 +0000 (0:00:11.278) 0:05:29.349 ********** 2025-06-03 15:40:20.533428 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.533435 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.533443 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.533451 | orchestrator | 2025-06-03 15:40:20.533458 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-06-03 15:40:20.533471 | orchestrator | Tuesday 03 June 2025 15:34:42 +0000 (0:00:00.351) 0:05:29.701 ********** 2025-06-03 15:40:20.533480 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-03 15:40:20.533488 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-03 15:40:20.533496 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-03 15:40:20.533504 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-03 15:40:20.533513 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.533522 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.533530 | orchestrator | 2025-06-03 15:40:20.533538 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-06-03 15:40:20.533547 | orchestrator | Tuesday 03 June 2025 15:34:45 +0000 (0:00:03.151) 0:05:32.852 ********** 2025-06-03 15:40:20.533598 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-03 15:40:20.533608 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-03 15:40:20.533650 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-03 15:40:20.533661 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-03 15:40:20.533669 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-03 15:40:20.533677 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-03 15:40:20.533685 | orchestrator | 2025-06-03 15:40:20.533693 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-06-03 15:40:20.533701 | orchestrator | Tuesday 03 June 2025 15:34:46 +0000 (0:00:01.344) 0:05:34.197 ********** 2025-06-03 15:40:20.533710 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.533718 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.533726 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.533734 | orchestrator | 2025-06-03 15:40:20.533741 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-06-03 15:40:20.533749 | orchestrator | Tuesday 03 June 2025 15:34:47 +0000 (0:00:00.744) 0:05:34.942 ********** 2025-06-03 15:40:20.533757 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.533765 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.533774 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.533782 | orchestrator | 2025-06-03 15:40:20.533790 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-06-03 15:40:20.533799 | orchestrator | Tuesday 03 June 2025 15:34:47 +0000 (0:00:00.315) 0:05:35.257 ********** 2025-06-03 15:40:20.533807 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.533816 | orchestrator | skipping: [testbed-node-1] 2025-06-03 
15:40:20.533824 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.533832 | orchestrator | 2025-06-03 15:40:20.533840 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-06-03 15:40:20.533861 | orchestrator | Tuesday 03 June 2025 15:34:48 +0000 (0:00:00.320) 0:05:35.578 ********** 2025-06-03 15:40:20.533869 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.533876 | orchestrator | 2025-06-03 15:40:20.533884 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-03 15:40:20.533892 | orchestrator | Tuesday 03 June 2025 15:34:49 +0000 (0:00:00.898) 0:05:36.477 ********** 2025-06-03 15:40:20.533900 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.533908 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.533916 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.533925 | orchestrator | 2025-06-03 15:40:20.533933 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-03 15:40:20.533941 | orchestrator | Tuesday 03 June 2025 15:34:49 +0000 (0:00:00.325) 0:05:36.802 ********** 2025-06-03 15:40:20.533950 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.533958 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.533967 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.533975 | orchestrator | 2025-06-03 15:40:20.533983 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-03 15:40:20.533992 | orchestrator | Tuesday 03 June 2025 15:34:49 +0000 (0:00:00.321) 0:05:37.124 ********** 2025-06-03 15:40:20.534001 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.534009 | orchestrator | 2025-06-03 15:40:20.534039 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-06-03 15:40:20.534049 | orchestrator | Tuesday 03 June 2025 15:34:50 +0000 (0:00:00.814) 0:05:37.938 ********** 2025-06-03 15:40:20.534058 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.534066 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.534074 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.534082 | orchestrator | 2025-06-03 15:40:20.534091 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-03 15:40:20.534099 | orchestrator | Tuesday 03 June 2025 15:34:51 +0000 (0:00:01.249) 0:05:39.188 ********** 2025-06-03 15:40:20.534107 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.534115 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.534123 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.534131 | orchestrator | 2025-06-03 15:40:20.534140 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-03 15:40:20.534148 | orchestrator | Tuesday 03 June 2025 15:34:53 +0000 (0:00:01.172) 0:05:40.360 ********** 2025-06-03 15:40:20.534156 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.534164 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.534173 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.534182 | orchestrator | 2025-06-03 15:40:20.534190 | orchestrator | TASK [ceph-mgr : Systemd start mgr] 
******************************************** 2025-06-03 15:40:20.534198 | orchestrator | Tuesday 03 June 2025 15:34:55 +0000 (0:00:02.227) 0:05:42.587 ********** 2025-06-03 15:40:20.534206 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.534214 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.534221 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.534229 | orchestrator | 2025-06-03 15:40:20.534237 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-06-03 15:40:20.534249 | orchestrator | Tuesday 03 June 2025 15:34:57 +0000 (0:00:02.006) 0:05:44.593 ********** 2025-06-03 15:40:20.534258 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.534266 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.534274 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-03 15:40:20.534283 | orchestrator | 2025-06-03 15:40:20.534291 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-03 15:40:20.534300 | orchestrator | Tuesday 03 June 2025 15:34:57 +0000 (0:00:00.447) 0:05:45.040 ********** 2025-06-03 15:40:20.534314 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-03 15:40:20.534323 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-03 15:40:20.534331 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-03 15:40:20.534366 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-03 15:40:20.534377 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
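The retries above belong to the "Wait for all mgr to be up" task: it polls the cluster until every deployed ceph-mgr daemon (one per monitor node here) has registered, which took roughly 30 seconds in this run. A hedged sketch of an equivalent manual check (illustrative only; the role's actual query may differ, and jq is assumed to be available):

    # count the active mgr plus standbys as seen by the cluster
    ceph mgr dump -f json | jq '[.active_name] + [.standbys[].name] | length'
    # the task keeps retrying until this reaches the expected count (3 in this testbed)
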
2025-06-03 15:40:20.534385 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:40:20.534394 | orchestrator | 2025-06-03 15:40:20.534403 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-06-03 15:40:20.534411 | orchestrator | Tuesday 03 June 2025 15:35:28 +0000 (0:00:30.725) 0:06:15.766 ********** 2025-06-03 15:40:20.534419 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:40:20.534428 | orchestrator | 2025-06-03 15:40:20.534436 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-06-03 15:40:20.534444 | orchestrator | Tuesday 03 June 2025 15:35:29 +0000 (0:00:01.451) 0:06:17.218 ********** 2025-06-03 15:40:20.534452 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.534460 | orchestrator | 2025-06-03 15:40:20.534469 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-06-03 15:40:20.534477 | orchestrator | Tuesday 03 June 2025 15:35:30 +0000 (0:00:00.680) 0:06:17.899 ********** 2025-06-03 15:40:20.534485 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.534493 | orchestrator | 2025-06-03 15:40:20.534501 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-06-03 15:40:20.534510 | orchestrator | Tuesday 03 June 2025 15:35:30 +0000 (0:00:00.119) 0:06:18.018 ********** 2025-06-03 15:40:20.534518 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-06-03 15:40:20.534527 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-06-03 15:40:20.534535 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-06-03 15:40:20.534544 | orchestrator | 2025-06-03 15:40:20.534566 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-06-03 15:40:20.534575 | orchestrator | Tuesday 03 June 2025 15:35:38 +0000 (0:00:07.705) 0:06:25.724 ********** 2025-06-03 15:40:20.534583 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-06-03 15:40:20.534592 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-06-03 15:40:20.534599 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-06-03 15:40:20.534607 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-06-03 15:40:20.534615 | orchestrator | 2025-06-03 15:40:20.534623 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-03 15:40:20.534632 | orchestrator | Tuesday 03 June 2025 15:35:43 +0000 (0:00:05.105) 0:06:30.829 ********** 2025-06-03 15:40:20.534640 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.534648 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.534657 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.534665 | orchestrator | 2025-06-03 15:40:20.534673 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-03 15:40:20.534681 | orchestrator | Tuesday 03 June 2025 15:35:44 +0000 (0:00:00.981) 0:06:31.811 ********** 2025-06-03 15:40:20.534690 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:20.534697 | orchestrator | 2025-06-03 
15:40:20.534706 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-03 15:40:20.534714 | orchestrator | Tuesday 03 June 2025 15:35:45 +0000 (0:00:00.607) 0:06:32.418 ********** 2025-06-03 15:40:20.534769 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.534779 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.534787 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.534795 | orchestrator | 2025-06-03 15:40:20.534803 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-03 15:40:20.534811 | orchestrator | Tuesday 03 June 2025 15:35:45 +0000 (0:00:00.344) 0:06:32.763 ********** 2025-06-03 15:40:20.534819 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.534827 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.534836 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.534843 | orchestrator | 2025-06-03 15:40:20.534851 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-03 15:40:20.534859 | orchestrator | Tuesday 03 June 2025 15:35:46 +0000 (0:00:01.427) 0:06:34.191 ********** 2025-06-03 15:40:20.534866 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-03 15:40:20.534875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-03 15:40:20.534883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-03 15:40:20.534891 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.534899 | orchestrator | 2025-06-03 15:40:20.534907 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-03 15:40:20.534920 | orchestrator | Tuesday 03 June 2025 15:35:47 +0000 (0:00:00.761) 0:06:34.953 ********** 2025-06-03 15:40:20.534929 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.534937 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.534946 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.534955 | orchestrator | 2025-06-03 15:40:20.534963 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-06-03 15:40:20.534971 | orchestrator | 2025-06-03 15:40:20.534979 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-03 15:40:20.534987 | orchestrator | Tuesday 03 June 2025 15:35:48 +0000 (0:00:00.653) 0:06:35.607 ********** 2025-06-03 15:40:20.534995 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.535004 | orchestrator | 2025-06-03 15:40:20.535011 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-03 15:40:20.535020 | orchestrator | Tuesday 03 June 2025 15:35:49 +0000 (0:00:00.752) 0:06:36.359 ********** 2025-06-03 15:40:20.535059 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.535070 | orchestrator | 2025-06-03 15:40:20.535078 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-03 15:40:20.535086 | orchestrator | Tuesday 03 June 2025 15:35:49 +0000 (0:00:00.473) 0:06:36.832 ********** 2025-06-03 15:40:20.535094 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.535102 | orchestrator | skipping: [testbed-node-4] 
2025-06-03 15:40:20.535110 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.535118 | orchestrator | 2025-06-03 15:40:20.535126 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-03 15:40:20.535134 | orchestrator | Tuesday 03 June 2025 15:35:49 +0000 (0:00:00.260) 0:06:37.092 ********** 2025-06-03 15:40:20.535142 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.535150 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.535158 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.535166 | orchestrator | 2025-06-03 15:40:20.535174 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-03 15:40:20.535183 | orchestrator | Tuesday 03 June 2025 15:35:50 +0000 (0:00:00.904) 0:06:37.997 ********** 2025-06-03 15:40:20.535192 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.535200 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.535208 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.535216 | orchestrator | 2025-06-03 15:40:20.535224 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-03 15:40:20.535240 | orchestrator | Tuesday 03 June 2025 15:35:51 +0000 (0:00:00.723) 0:06:38.721 ********** 2025-06-03 15:40:20.535248 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.535257 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.535265 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.535274 | orchestrator | 2025-06-03 15:40:20.535283 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-03 15:40:20.535291 | orchestrator | Tuesday 03 June 2025 15:35:52 +0000 (0:00:00.779) 0:06:39.500 ********** 2025-06-03 15:40:20.535299 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.535307 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.535316 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.535324 | orchestrator | 2025-06-03 15:40:20.535332 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-03 15:40:20.535341 | orchestrator | Tuesday 03 June 2025 15:35:52 +0000 (0:00:00.358) 0:06:39.858 ********** 2025-06-03 15:40:20.535349 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.535357 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.535365 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.535374 | orchestrator | 2025-06-03 15:40:20.535382 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-03 15:40:20.535391 | orchestrator | Tuesday 03 June 2025 15:35:53 +0000 (0:00:00.530) 0:06:40.389 ********** 2025-06-03 15:40:20.535399 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.535407 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.535415 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.535423 | orchestrator | 2025-06-03 15:40:20.535431 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-03 15:40:20.535439 | orchestrator | Tuesday 03 June 2025 15:35:53 +0000 (0:00:00.311) 0:06:40.701 ********** 2025-06-03 15:40:20.535448 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.535456 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.535465 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.535473 | orchestrator | 2025-06-03 
15:40:20.535481 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-03 15:40:20.535489 | orchestrator | Tuesday 03 June 2025 15:35:54 +0000 (0:00:00.732) 0:06:41.434 ********** 2025-06-03 15:40:20.535498 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.535507 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.535515 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.535523 | orchestrator | 2025-06-03 15:40:20.535532 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-03 15:40:20.535539 | orchestrator | Tuesday 03 June 2025 15:35:54 +0000 (0:00:00.686) 0:06:42.120 ********** 2025-06-03 15:40:20.535547 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.535568 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.535577 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.535585 | orchestrator | 2025-06-03 15:40:20.535593 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-03 15:40:20.535601 | orchestrator | Tuesday 03 June 2025 15:35:55 +0000 (0:00:00.609) 0:06:42.730 ********** 2025-06-03 15:40:20.535609 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.535617 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.535625 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.535634 | orchestrator | 2025-06-03 15:40:20.535642 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:20.535650 | orchestrator | Tuesday 03 June 2025 15:35:55 +0000 (0:00:00.326) 0:06:43.056 ********** 2025-06-03 15:40:20.535658 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.535667 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.535676 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.535684 | orchestrator | 2025-06-03 15:40:20.535693 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:20.535706 | orchestrator | Tuesday 03 June 2025 15:35:56 +0000 (0:00:00.317) 0:06:43.374 ********** 2025-06-03 15:40:20.535721 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.535730 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.535738 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.535747 | orchestrator | 2025-06-03 15:40:20.535755 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:20.535764 | orchestrator | Tuesday 03 June 2025 15:35:56 +0000 (0:00:00.341) 0:06:43.716 ********** 2025-06-03 15:40:20.535772 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.535781 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.535789 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.535797 | orchestrator | 2025-06-03 15:40:20.535806 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:20.535814 | orchestrator | Tuesday 03 June 2025 15:35:57 +0000 (0:00:00.658) 0:06:44.374 ********** 2025-06-03 15:40:20.535822 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.535831 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.535849 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.535858 | orchestrator | 2025-06-03 15:40:20.535872 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 
15:40:20.535880 | orchestrator | Tuesday 03 June 2025 15:35:57 +0000 (0:00:00.329) 0:06:44.703 ********** 2025-06-03 15:40:20.535889 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.535897 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.535905 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.535913 | orchestrator | 2025-06-03 15:40:20.535922 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:20.535930 | orchestrator | Tuesday 03 June 2025 15:35:57 +0000 (0:00:00.311) 0:06:45.015 ********** 2025-06-03 15:40:20.535939 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.535947 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.535955 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.535963 | orchestrator | 2025-06-03 15:40:20.535971 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-03 15:40:20.535979 | orchestrator | Tuesday 03 June 2025 15:35:58 +0000 (0:00:00.356) 0:06:45.372 ********** 2025-06-03 15:40:20.535987 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.535995 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.536003 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.536011 | orchestrator | 2025-06-03 15:40:20.536020 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-03 15:40:20.536028 | orchestrator | Tuesday 03 June 2025 15:35:58 +0000 (0:00:00.648) 0:06:46.020 ********** 2025-06-03 15:40:20.536036 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.536044 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.536052 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.536060 | orchestrator | 2025-06-03 15:40:20.536069 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-03 15:40:20.536076 | orchestrator | Tuesday 03 June 2025 15:35:59 +0000 (0:00:00.549) 0:06:46.570 ********** 2025-06-03 15:40:20.536084 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.536092 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.536100 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.536108 | orchestrator | 2025-06-03 15:40:20.536116 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-03 15:40:20.536123 | orchestrator | Tuesday 03 June 2025 15:35:59 +0000 (0:00:00.370) 0:06:46.940 ********** 2025-06-03 15:40:20.536130 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-03 15:40:20.536138 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:40:20.536145 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:40:20.536153 | orchestrator | 2025-06-03 15:40:20.536161 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-03 15:40:20.536169 | orchestrator | Tuesday 03 June 2025 15:36:00 +0000 (0:00:00.982) 0:06:47.923 ********** 2025-06-03 15:40:20.536181 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.536189 | orchestrator | 2025-06-03 15:40:20.536196 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-03 15:40:20.536204 | 
orchestrator | Tuesday 03 June 2025 15:36:01 +0000 (0:00:00.783) 0:06:48.707 ********** 2025-06-03 15:40:20.536211 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.536218 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.536225 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.536232 | orchestrator | 2025-06-03 15:40:20.536239 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-03 15:40:20.536247 | orchestrator | Tuesday 03 June 2025 15:36:01 +0000 (0:00:00.298) 0:06:49.005 ********** 2025-06-03 15:40:20.536254 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.536261 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.536268 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.536275 | orchestrator | 2025-06-03 15:40:20.536282 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-03 15:40:20.536290 | orchestrator | Tuesday 03 June 2025 15:36:02 +0000 (0:00:00.342) 0:06:49.348 ********** 2025-06-03 15:40:20.536297 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.536304 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.536311 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.536319 | orchestrator | 2025-06-03 15:40:20.536326 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-03 15:40:20.536334 | orchestrator | Tuesday 03 June 2025 15:36:03 +0000 (0:00:00.979) 0:06:50.327 ********** 2025-06-03 15:40:20.536342 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.536350 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.536357 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.536365 | orchestrator | 2025-06-03 15:40:20.536373 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-03 15:40:20.536380 | orchestrator | Tuesday 03 June 2025 15:36:03 +0000 (0:00:00.388) 0:06:50.715 ********** 2025-06-03 15:40:20.536388 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-03 15:40:20.536400 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-03 15:40:20.536408 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-03 15:40:20.536416 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-03 15:40:20.536423 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-03 15:40:20.536431 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-03 15:40:20.536438 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-03 15:40:20.536445 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-03 15:40:20.536459 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-03 15:40:20.536467 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-03 15:40:20.536474 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-03 15:40:20.536481 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.file-max', 'value': 26234859}) 2025-06-03 15:40:20.536489 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-03 15:40:20.536496 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-03 15:40:20.536503 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-03 15:40:20.536511 | orchestrator | 2025-06-03 15:40:20.536524 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-06-03 15:40:20.536531 | orchestrator | Tuesday 03 June 2025 15:36:06 +0000 (0:00:03.273) 0:06:53.988 ********** 2025-06-03 15:40:20.536539 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.536546 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.536588 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.536596 | orchestrator | 2025-06-03 15:40:20.536604 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-03 15:40:20.536611 | orchestrator | Tuesday 03 June 2025 15:36:06 +0000 (0:00:00.262) 0:06:54.251 ********** 2025-06-03 15:40:20.536618 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.536625 | orchestrator | 2025-06-03 15:40:20.536633 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-03 15:40:20.536640 | orchestrator | Tuesday 03 June 2025 15:36:07 +0000 (0:00:00.652) 0:06:54.903 ********** 2025-06-03 15:40:20.536648 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-03 15:40:20.536656 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-03 15:40:20.536663 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-03 15:40:20.536670 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-03 15:40:20.536677 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-03 15:40:20.536684 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-03 15:40:20.536691 | orchestrator | 2025-06-03 15:40:20.536698 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-03 15:40:20.536706 | orchestrator | Tuesday 03 June 2025 15:36:08 +0000 (0:00:00.921) 0:06:55.825 ********** 2025-06-03 15:40:20.536714 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.536722 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-03 15:40:20.536729 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-03 15:40:20.536737 | orchestrator | 2025-06-03 15:40:20.536744 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-03 15:40:20.536752 | orchestrator | Tuesday 03 June 2025 15:36:10 +0000 (0:00:02.190) 0:06:58.016 ********** 2025-06-03 15:40:20.536759 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-03 15:40:20.536767 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-03 15:40:20.536774 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.536782 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-03 15:40:20.536789 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-03 15:40:20.536796 | orchestrator | 
changed: [testbed-node-4] 2025-06-03 15:40:20.536803 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-03 15:40:20.536811 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-03 15:40:20.536818 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.536826 | orchestrator | 2025-06-03 15:40:20.536834 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-03 15:40:20.536841 | orchestrator | Tuesday 03 June 2025 15:36:11 +0000 (0:00:01.285) 0:06:59.302 ********** 2025-06-03 15:40:20.536848 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:40:20.536856 | orchestrator | 2025-06-03 15:40:20.536863 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-03 15:40:20.536871 | orchestrator | Tuesday 03 June 2025 15:36:14 +0000 (0:00:02.273) 0:07:01.575 ********** 2025-06-03 15:40:20.536878 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.536885 | orchestrator | 2025-06-03 15:40:20.536893 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-03 15:40:20.536900 | orchestrator | Tuesday 03 June 2025 15:36:14 +0000 (0:00:00.521) 0:07:02.096 ********** 2025-06-03 15:40:20.536917 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-53b632c4-9781-517b-ad8e-3b37c9789a01', 'data_vg': 'ceph-53b632c4-9781-517b-ad8e-3b37c9789a01'}) 2025-06-03 15:40:20.536926 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8e839e97-cc3d-5431-ae91-f94b997cade9', 'data_vg': 'ceph-8e839e97-cc3d-5431-ae91-f94b997cade9'}) 2025-06-03 15:40:20.536934 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a5276575-f764-5428-894d-d125091c496f', 'data_vg': 'ceph-a5276575-f764-5428-894d-d125091c496f'}) 2025-06-03 15:40:20.536942 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1191cd60-4b8c-5454-8e42-9818af3c2595', 'data_vg': 'ceph-1191cd60-4b8c-5454-8e42-9818af3c2595'}) 2025-06-03 15:40:20.536955 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5', 'data_vg': 'ceph-ba1ebe02-3aa8-524d-8f69-e3cc70944ba5'}) 2025-06-03 15:40:20.536962 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6a443cc3-e60d-5588-869b-39e93dfe07d6', 'data_vg': 'ceph-6a443cc3-e60d-5588-869b-39e93dfe07d6'}) 2025-06-03 15:40:20.536969 | orchestrator | 2025-06-03 15:40:20.536976 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-03 15:40:20.536984 | orchestrator | Tuesday 03 June 2025 15:36:54 +0000 (0:00:39.994) 0:07:42.091 ********** 2025-06-03 15:40:20.536992 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.536999 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.537007 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.537014 | orchestrator | 2025-06-03 15:40:20.537021 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-03 15:40:20.537028 | orchestrator | Tuesday 03 June 2025 15:36:55 +0000 (0:00:00.438) 0:07:42.529 ********** 2025-06-03 15:40:20.537035 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.537041 | orchestrator | 2025-06-03 
15:40:20.537049 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-03 15:40:20.537056 | orchestrator | Tuesday 03 June 2025 15:36:55 +0000 (0:00:00.520) 0:07:43.050 ********** 2025-06-03 15:40:20.537064 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.537072 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.537080 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.537087 | orchestrator | 2025-06-03 15:40:20.537094 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-03 15:40:20.537102 | orchestrator | Tuesday 03 June 2025 15:36:56 +0000 (0:00:00.582) 0:07:43.632 ********** 2025-06-03 15:40:20.537110 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.537118 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.537125 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.537133 | orchestrator | 2025-06-03 15:40:20.537140 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-03 15:40:20.537148 | orchestrator | Tuesday 03 June 2025 15:36:59 +0000 (0:00:02.875) 0:07:46.508 ********** 2025-06-03 15:40:20.537155 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.537163 | orchestrator | 2025-06-03 15:40:20.537171 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-06-03 15:40:20.537178 | orchestrator | Tuesday 03 June 2025 15:36:59 +0000 (0:00:00.531) 0:07:47.040 ********** 2025-06-03 15:40:20.537185 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.537192 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.537199 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.537206 | orchestrator | 2025-06-03 15:40:20.537213 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-03 15:40:20.537220 | orchestrator | Tuesday 03 June 2025 15:37:00 +0000 (0:00:01.216) 0:07:48.256 ********** 2025-06-03 15:40:20.537227 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.537234 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.537250 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.537257 | orchestrator | 2025-06-03 15:40:20.537265 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-03 15:40:20.537273 | orchestrator | Tuesday 03 June 2025 15:37:02 +0000 (0:00:01.250) 0:07:49.507 ********** 2025-06-03 15:40:20.537281 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.537288 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.537296 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.537304 | orchestrator | 2025-06-03 15:40:20.537311 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-03 15:40:20.537319 | orchestrator | Tuesday 03 June 2025 15:37:04 +0000 (0:00:01.849) 0:07:51.357 ********** 2025-06-03 15:40:20.537326 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.537333 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.537341 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.537348 | orchestrator | 2025-06-03 15:40:20.537355 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-06-03 15:40:20.537362 | orchestrator | Tuesday 03 
June 2025 15:37:04 +0000 (0:00:00.301) 0:07:51.659 ********** 2025-06-03 15:40:20.537369 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.537376 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.537383 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.537391 | orchestrator | 2025-06-03 15:40:20.537398 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-03 15:40:20.537406 | orchestrator | Tuesday 03 June 2025 15:37:04 +0000 (0:00:00.269) 0:07:51.928 ********** 2025-06-03 15:40:20.537413 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-03 15:40:20.537421 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-06-03 15:40:20.537429 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-06-03 15:40:20.537437 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-06-03 15:40:20.537444 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-06-03 15:40:20.537452 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-03 15:40:20.537460 | orchestrator | 2025-06-03 15:40:20.537467 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-03 15:40:20.537475 | orchestrator | Tuesday 03 June 2025 15:37:05 +0000 (0:00:01.277) 0:07:53.205 ********** 2025-06-03 15:40:20.537493 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-03 15:40:20.537500 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-03 15:40:20.537508 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-06-03 15:40:20.537515 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-03 15:40:20.537522 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-03 15:40:20.537529 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-06-03 15:40:20.537536 | orchestrator | 2025-06-03 15:40:20.537543 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-06-03 15:40:20.537550 | orchestrator | Tuesday 03 June 2025 15:37:07 +0000 (0:00:02.046) 0:07:55.252 ********** 2025-06-03 15:40:20.537572 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-03 15:40:20.537579 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-03 15:40:20.537593 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-06-03 15:40:20.537601 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-06-03 15:40:20.537609 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-03 15:40:20.537616 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-03 15:40:20.537624 | orchestrator | 2025-06-03 15:40:20.537632 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-03 15:40:20.537640 | orchestrator | Tuesday 03 June 2025 15:37:11 +0000 (0:00:03.819) 0:07:59.071 ********** 2025-06-03 15:40:20.537648 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.537655 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.537663 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:40:20.537671 | orchestrator | 2025-06-03 15:40:20.537678 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-03 15:40:20.537691 | orchestrator | Tuesday 03 June 2025 15:37:14 +0000 (0:00:02.756) 0:08:01.827 ********** 2025-06-03 15:40:20.537699 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.537708 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.537716 | 
orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-06-03 15:40:20.537724 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:40:20.537732 | orchestrator | 2025-06-03 15:40:20.537739 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-03 15:40:20.537747 | orchestrator | Tuesday 03 June 2025 15:37:27 +0000 (0:00:13.161) 0:08:14.988 ********** 2025-06-03 15:40:20.537756 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.537763 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.537771 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.537779 | orchestrator | 2025-06-03 15:40:20.537787 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-03 15:40:20.537795 | orchestrator | Tuesday 03 June 2025 15:37:28 +0000 (0:00:00.846) 0:08:15.835 ********** 2025-06-03 15:40:20.537803 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.537810 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.537818 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.537826 | orchestrator | 2025-06-03 15:40:20.537833 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-03 15:40:20.537841 | orchestrator | Tuesday 03 June 2025 15:37:29 +0000 (0:00:00.648) 0:08:16.484 ********** 2025-06-03 15:40:20.537849 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.537856 | orchestrator | 2025-06-03 15:40:20.537864 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-03 15:40:20.537872 | orchestrator | Tuesday 03 June 2025 15:37:29 +0000 (0:00:00.594) 0:08:17.078 ********** 2025-06-03 15:40:20.537956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:20.537981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:20.537989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:20.537997 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538004 | orchestrator | 2025-06-03 15:40:20.538010 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-03 15:40:20.538058 | orchestrator | Tuesday 03 June 2025 15:37:30 +0000 (0:00:00.410) 0:08:17.488 ********** 2025-06-03 15:40:20.538067 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538074 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.538080 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.538088 | orchestrator | 2025-06-03 15:40:20.538095 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-03 15:40:20.538102 | orchestrator | Tuesday 03 June 2025 15:37:30 +0000 (0:00:00.331) 0:08:17.820 ********** 2025-06-03 15:40:20.538110 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538117 | orchestrator | 2025-06-03 15:40:20.538124 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-03 15:40:20.538130 | orchestrator | Tuesday 03 June 2025 15:37:30 +0000 (0:00:00.231) 0:08:18.052 ********** 2025-06-03 15:40:20.538137 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538144 | 
orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.538150 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.538157 | orchestrator | 2025-06-03 15:40:20.538162 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-03 15:40:20.538166 | orchestrator | Tuesday 03 June 2025 15:37:31 +0000 (0:00:00.563) 0:08:18.616 ********** 2025-06-03 15:40:20.538170 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538174 | orchestrator | 2025-06-03 15:40:20.538178 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-03 15:40:20.538187 | orchestrator | Tuesday 03 June 2025 15:37:31 +0000 (0:00:00.233) 0:08:18.849 ********** 2025-06-03 15:40:20.538191 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538195 | orchestrator | 2025-06-03 15:40:20.538202 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-03 15:40:20.538206 | orchestrator | Tuesday 03 June 2025 15:37:31 +0000 (0:00:00.232) 0:08:19.082 ********** 2025-06-03 15:40:20.538210 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538214 | orchestrator | 2025-06-03 15:40:20.538218 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-03 15:40:20.538225 | orchestrator | Tuesday 03 June 2025 15:37:31 +0000 (0:00:00.142) 0:08:19.225 ********** 2025-06-03 15:40:20.538232 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538238 | orchestrator | 2025-06-03 15:40:20.538245 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-03 15:40:20.538251 | orchestrator | Tuesday 03 June 2025 15:37:32 +0000 (0:00:00.225) 0:08:19.450 ********** 2025-06-03 15:40:20.538258 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538264 | orchestrator | 2025-06-03 15:40:20.538271 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-03 15:40:20.538278 | orchestrator | Tuesday 03 June 2025 15:37:32 +0000 (0:00:00.208) 0:08:19.659 ********** 2025-06-03 15:40:20.538292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:20.538299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:20.538305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:20.538311 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538318 | orchestrator | 2025-06-03 15:40:20.538324 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-03 15:40:20.538330 | orchestrator | Tuesday 03 June 2025 15:37:32 +0000 (0:00:00.379) 0:08:20.039 ********** 2025-06-03 15:40:20.538337 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538343 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.538349 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.538356 | orchestrator | 2025-06-03 15:40:20.538363 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-03 15:40:20.538369 | orchestrator | Tuesday 03 June 2025 15:37:33 +0000 (0:00:00.289) 0:08:20.328 ********** 2025-06-03 15:40:20.538376 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538380 | orchestrator | 2025-06-03 15:40:20.538384 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] 
**************************** 2025-06-03 15:40:20.538388 | orchestrator | Tuesday 03 June 2025 15:37:33 +0000 (0:00:00.799) 0:08:21.127 ********** 2025-06-03 15:40:20.538392 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538395 | orchestrator | 2025-06-03 15:40:20.538399 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-03 15:40:20.538403 | orchestrator | 2025-06-03 15:40:20.538407 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-03 15:40:20.538411 | orchestrator | Tuesday 03 June 2025 15:37:34 +0000 (0:00:00.678) 0:08:21.805 ********** 2025-06-03 15:40:20.538415 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.538420 | orchestrator | 2025-06-03 15:40:20.538424 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-03 15:40:20.538428 | orchestrator | Tuesday 03 June 2025 15:37:35 +0000 (0:00:01.213) 0:08:23.019 ********** 2025-06-03 15:40:20.538432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.538436 | orchestrator | 2025-06-03 15:40:20.538440 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-03 15:40:20.538444 | orchestrator | Tuesday 03 June 2025 15:37:36 +0000 (0:00:01.201) 0:08:24.221 ********** 2025-06-03 15:40:20.538451 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538455 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.538459 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.538463 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.538467 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.538471 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.538475 | orchestrator | 2025-06-03 15:40:20.538479 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-03 15:40:20.538483 | orchestrator | Tuesday 03 June 2025 15:37:37 +0000 (0:00:00.954) 0:08:25.176 ********** 2025-06-03 15:40:20.538487 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.538490 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.538494 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.538498 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.538502 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.538506 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.538510 | orchestrator | 2025-06-03 15:40:20.538514 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-03 15:40:20.538518 | orchestrator | Tuesday 03 June 2025 15:37:38 +0000 (0:00:01.095) 0:08:26.272 ********** 2025-06-03 15:40:20.538521 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.538525 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.538529 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.538533 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.538537 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.538541 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.538544 | orchestrator | 2025-06-03 15:40:20.538548 | orchestrator 
| TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-03 15:40:20.538566 | orchestrator | Tuesday 03 June 2025 15:37:40 +0000 (0:00:01.298) 0:08:27.571 ********** 2025-06-03 15:40:20.538571 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.538575 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.538579 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.538582 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.538586 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.538590 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.538594 | orchestrator | 2025-06-03 15:40:20.538598 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-03 15:40:20.538602 | orchestrator | Tuesday 03 June 2025 15:37:41 +0000 (0:00:01.130) 0:08:28.702 ********** 2025-06-03 15:40:20.538606 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538612 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.538616 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.538620 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.538624 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.538628 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.538632 | orchestrator | 2025-06-03 15:40:20.538636 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-03 15:40:20.538640 | orchestrator | Tuesday 03 June 2025 15:37:42 +0000 (0:00:00.851) 0:08:29.553 ********** 2025-06-03 15:40:20.538643 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.538647 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.538651 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.538655 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538659 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.538663 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.538667 | orchestrator | 2025-06-03 15:40:20.538671 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-03 15:40:20.538675 | orchestrator | Tuesday 03 June 2025 15:37:42 +0000 (0:00:00.579) 0:08:30.132 ********** 2025-06-03 15:40:20.538681 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.538685 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.538689 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.538693 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538700 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.538704 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.538708 | orchestrator | 2025-06-03 15:40:20.538712 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-03 15:40:20.538716 | orchestrator | Tuesday 03 June 2025 15:37:43 +0000 (0:00:00.811) 0:08:30.944 ********** 2025-06-03 15:40:20.538719 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.538723 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.538727 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.538731 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.538735 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.538739 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.538742 | orchestrator | 2025-06-03 15:40:20.538746 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] 
********************** 2025-06-03 15:40:20.538750 | orchestrator | Tuesday 03 June 2025 15:37:44 +0000 (0:00:01.064) 0:08:32.008 ********** 2025-06-03 15:40:20.538754 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.538758 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.538762 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.538766 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.538770 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.538774 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.538777 | orchestrator | 2025-06-03 15:40:20.538781 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-03 15:40:20.538785 | orchestrator | Tuesday 03 June 2025 15:37:46 +0000 (0:00:01.615) 0:08:33.624 ********** 2025-06-03 15:40:20.538789 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.538793 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.538797 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.538801 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538805 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.538809 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.538812 | orchestrator | 2025-06-03 15:40:20.538816 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-03 15:40:20.538820 | orchestrator | Tuesday 03 June 2025 15:37:46 +0000 (0:00:00.676) 0:08:34.301 ********** 2025-06-03 15:40:20.538824 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.538828 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.538832 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.538836 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538840 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.538843 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.538847 | orchestrator | 2025-06-03 15:40:20.538851 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:20.538855 | orchestrator | Tuesday 03 June 2025 15:37:47 +0000 (0:00:00.842) 0:08:35.143 ********** 2025-06-03 15:40:20.538859 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.538863 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.538867 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.538871 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.538875 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.538878 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.538882 | orchestrator | 2025-06-03 15:40:20.538886 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:20.538890 | orchestrator | Tuesday 03 June 2025 15:37:48 +0000 (0:00:00.596) 0:08:35.740 ********** 2025-06-03 15:40:20.538894 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.538898 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.538902 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.538906 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.538909 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.538913 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.538917 | orchestrator | 2025-06-03 15:40:20.538921 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:20.538928 | orchestrator | Tuesday 03 June 
2025 15:37:49 +0000 (0:00:00.868) 0:08:36.609 ********** 2025-06-03 15:40:20.538931 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.538935 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.538939 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.538943 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.538947 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.538951 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.538955 | orchestrator | 2025-06-03 15:40:20.538959 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:20.538963 | orchestrator | Tuesday 03 June 2025 15:37:49 +0000 (0:00:00.634) 0:08:37.244 ********** 2025-06-03 15:40:20.538966 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.538970 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.538974 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.538978 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.538982 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.538986 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.538990 | orchestrator | 2025-06-03 15:40:20.538994 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 15:40:20.538998 | orchestrator | Tuesday 03 June 2025 15:37:50 +0000 (0:00:00.795) 0:08:38.040 ********** 2025-06-03 15:40:20.539004 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:20.539008 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:20.539011 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:20.539015 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.539019 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.539023 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.539027 | orchestrator | 2025-06-03 15:40:20.539031 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:20.539035 | orchestrator | Tuesday 03 June 2025 15:37:51 +0000 (0:00:00.572) 0:08:38.612 ********** 2025-06-03 15:40:20.539038 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.539042 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.539046 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.539050 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.539054 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.539058 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.539062 | orchestrator | 2025-06-03 15:40:20.539066 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-03 15:40:20.539072 | orchestrator | Tuesday 03 June 2025 15:37:52 +0000 (0:00:00.823) 0:08:39.436 ********** 2025-06-03 15:40:20.539076 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.539080 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.539084 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.539088 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539092 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539095 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539099 | orchestrator | 2025-06-03 15:40:20.539103 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-03 15:40:20.539107 | orchestrator | Tuesday 03 June 2025 15:37:52 +0000 (0:00:00.690) 0:08:40.126 ********** 
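The "Check for a ... container" tasks above only probe whether a matching daemon container is already running on each node; the subsequent Set_fact handler_*_status tasks turn those probe results into per-node facts that decide later whether a restart handler fires. On a containerized node such a probe boils down to a container-runtime query of roughly this shape (podman is assumed here purely for illustration; the exact command and filter ceph-ansible uses may differ):

    # is a ceph-crash container already running on this host? (illustrative sketch)
    podman ps -q --filter name=ceph-crash

A non-empty result is what allows the matching handler_crash_status fact to be set for that node.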
2025-06-03 15:40:20.539111 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.539115 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.539119 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.539123 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539126 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539130 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539134 | orchestrator | 2025-06-03 15:40:20.539138 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-03 15:40:20.539142 | orchestrator | Tuesday 03 June 2025 15:37:54 +0000 (0:00:01.237) 0:08:41.363 ********** 2025-06-03 15:40:20.539146 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.539150 | orchestrator | 2025-06-03 15:40:20.539154 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-03 15:40:20.539160 | orchestrator | Tuesday 03 June 2025 15:37:58 +0000 (0:00:04.588) 0:08:45.952 ********** 2025-06-03 15:40:20.539164 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.539168 | orchestrator | 2025-06-03 15:40:20.539172 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-03 15:40:20.539176 | orchestrator | Tuesday 03 June 2025 15:38:00 +0000 (0:00:02.137) 0:08:48.090 ********** 2025-06-03 15:40:20.539180 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.539184 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.539188 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.539191 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.539195 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.539199 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.539203 | orchestrator | 2025-06-03 15:40:20.539207 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-06-03 15:40:20.539211 | orchestrator | Tuesday 03 June 2025 15:38:02 +0000 (0:00:01.690) 0:08:49.781 ********** 2025-06-03 15:40:20.539215 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.539218 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.539222 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.539226 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.539230 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.539234 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.539238 | orchestrator | 2025-06-03 15:40:20.539241 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-03 15:40:20.539245 | orchestrator | Tuesday 03 June 2025 15:38:03 +0000 (0:00:01.002) 0:08:50.783 ********** 2025-06-03 15:40:20.539250 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.539254 | orchestrator | 2025-06-03 15:40:20.539258 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-03 15:40:20.539262 | orchestrator | Tuesday 03 June 2025 15:38:04 +0000 (0:00:01.235) 0:08:52.019 ********** 2025-06-03 15:40:20.539266 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.539270 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.539274 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.539278 | orchestrator | changed: [testbed-node-3] 
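The ceph-crash role shown above creates a single client.crash key on the first monitor, distributes it to all six nodes, prepares /var/lib/ceph/crash/posted, and then installs a systemd unit per node for the crash-agent container. Done by hand, the keyring step corresponds roughly to the crash-module setup from the Ceph documentation (an illustrative sketch, not the exact ceph-ansible invocation):

    # minimal capabilities for the crash agent, created once against the monitors
    ceph auth get-or-create client.crash mon 'profile crash' mgr 'profile crash'
    # once the agents are running, posted crash dumps can be listed cluster-wide
    ceph crash ls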
2025-06-03 15:40:20.539282 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.539285 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.539289 | orchestrator | 2025-06-03 15:40:20.539293 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-03 15:40:20.539297 | orchestrator | Tuesday 03 June 2025 15:38:06 +0000 (0:00:02.103) 0:08:54.123 ********** 2025-06-03 15:40:20.539301 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.539305 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.539309 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.539313 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.539317 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.539320 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.539324 | orchestrator | 2025-06-03 15:40:20.539328 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-03 15:40:20.539332 | orchestrator | Tuesday 03 June 2025 15:38:10 +0000 (0:00:03.243) 0:08:57.366 ********** 2025-06-03 15:40:20.539336 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.539340 | orchestrator | 2025-06-03 15:40:20.539345 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-03 15:40:20.539352 | orchestrator | Tuesday 03 June 2025 15:38:11 +0000 (0:00:01.298) 0:08:58.664 ********** 2025-06-03 15:40:20.539358 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.539368 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.539374 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.539384 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539390 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539396 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539403 | orchestrator | 2025-06-03 15:40:20.539410 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-06-03 15:40:20.539416 | orchestrator | Tuesday 03 June 2025 15:38:12 +0000 (0:00:00.817) 0:08:59.482 ********** 2025-06-03 15:40:20.539423 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:20.539429 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:20.539432 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:20.539436 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.539440 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.539444 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.539448 | orchestrator | 2025-06-03 15:40:20.539452 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-03 15:40:20.539456 | orchestrator | Tuesday 03 June 2025 15:38:14 +0000 (0:00:02.086) 0:09:01.568 ********** 2025-06-03 15:40:20.539460 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:20.539467 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:20.539471 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:20.539475 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539479 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539483 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539487 | orchestrator | 2025-06-03 15:40:20.539491 | orchestrator | PLAY [Apply role ceph-mds] 
***************************************************** 2025-06-03 15:40:20.539495 | orchestrator | 2025-06-03 15:40:20.539498 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-03 15:40:20.539502 | orchestrator | Tuesday 03 June 2025 15:38:15 +0000 (0:00:01.137) 0:09:02.706 ********** 2025-06-03 15:40:20.539506 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.539510 | orchestrator | 2025-06-03 15:40:20.539514 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-03 15:40:20.539518 | orchestrator | Tuesday 03 June 2025 15:38:15 +0000 (0:00:00.513) 0:09:03.220 ********** 2025-06-03 15:40:20.539522 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.539526 | orchestrator | 2025-06-03 15:40:20.539530 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-03 15:40:20.539534 | orchestrator | Tuesday 03 June 2025 15:38:16 +0000 (0:00:00.776) 0:09:03.997 ********** 2025-06-03 15:40:20.539538 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.539542 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.539546 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.539550 | orchestrator | 2025-06-03 15:40:20.539566 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-03 15:40:20.539570 | orchestrator | Tuesday 03 June 2025 15:38:17 +0000 (0:00:00.325) 0:09:04.323 ********** 2025-06-03 15:40:20.539574 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539578 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539582 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539586 | orchestrator | 2025-06-03 15:40:20.539590 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-03 15:40:20.539594 | orchestrator | Tuesday 03 June 2025 15:38:17 +0000 (0:00:00.654) 0:09:04.978 ********** 2025-06-03 15:40:20.539598 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539602 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539606 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539610 | orchestrator | 2025-06-03 15:40:20.539614 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-03 15:40:20.539618 | orchestrator | Tuesday 03 June 2025 15:38:18 +0000 (0:00:00.980) 0:09:05.958 ********** 2025-06-03 15:40:20.539622 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539625 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539633 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539637 | orchestrator | 2025-06-03 15:40:20.539640 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-03 15:40:20.539644 | orchestrator | Tuesday 03 June 2025 15:38:19 +0000 (0:00:00.675) 0:09:06.633 ********** 2025-06-03 15:40:20.539648 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.539652 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.539656 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.539660 | orchestrator | 2025-06-03 15:40:20.539664 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-06-03 15:40:20.539668 | orchestrator | Tuesday 03 June 2025 15:38:19 +0000 (0:00:00.348) 0:09:06.982 ********** 2025-06-03 15:40:20.539672 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.539676 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.539680 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.539683 | orchestrator | 2025-06-03 15:40:20.539687 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-03 15:40:20.539691 | orchestrator | Tuesday 03 June 2025 15:38:19 +0000 (0:00:00.305) 0:09:07.288 ********** 2025-06-03 15:40:20.539695 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.539699 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.539703 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.539707 | orchestrator | 2025-06-03 15:40:20.539711 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-03 15:40:20.539715 | orchestrator | Tuesday 03 June 2025 15:38:20 +0000 (0:00:00.609) 0:09:07.898 ********** 2025-06-03 15:40:20.539719 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539723 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539727 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539730 | orchestrator | 2025-06-03 15:40:20.539734 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-03 15:40:20.539738 | orchestrator | Tuesday 03 June 2025 15:38:21 +0000 (0:00:00.707) 0:09:08.606 ********** 2025-06-03 15:40:20.539742 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539746 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539750 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539754 | orchestrator | 2025-06-03 15:40:20.539758 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-03 15:40:20.539764 | orchestrator | Tuesday 03 June 2025 15:38:22 +0000 (0:00:00.722) 0:09:09.328 ********** 2025-06-03 15:40:20.539768 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.539772 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.539776 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.539780 | orchestrator | 2025-06-03 15:40:20.539784 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-03 15:40:20.539788 | orchestrator | Tuesday 03 June 2025 15:38:22 +0000 (0:00:00.300) 0:09:09.628 ********** 2025-06-03 15:40:20.539792 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.539796 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.539799 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.539803 | orchestrator | 2025-06-03 15:40:20.539807 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:20.539811 | orchestrator | Tuesday 03 June 2025 15:38:22 +0000 (0:00:00.571) 0:09:10.200 ********** 2025-06-03 15:40:20.539815 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539819 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539823 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539827 | orchestrator | 2025-06-03 15:40:20.539833 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:20.539837 | orchestrator | Tuesday 03 June 2025 15:38:23 +0000 
(0:00:00.334) 0:09:10.535 ********** 2025-06-03 15:40:20.539841 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539845 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539849 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539853 | orchestrator | 2025-06-03 15:40:20.539860 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:20.539864 | orchestrator | Tuesday 03 June 2025 15:38:23 +0000 (0:00:00.354) 0:09:10.889 ********** 2025-06-03 15:40:20.539868 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539872 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539876 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539880 | orchestrator | 2025-06-03 15:40:20.539884 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:20.539887 | orchestrator | Tuesday 03 June 2025 15:38:23 +0000 (0:00:00.314) 0:09:11.203 ********** 2025-06-03 15:40:20.539891 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.539895 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.539899 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.539903 | orchestrator | 2025-06-03 15:40:20.539907 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 15:40:20.539911 | orchestrator | Tuesday 03 June 2025 15:38:24 +0000 (0:00:00.586) 0:09:11.790 ********** 2025-06-03 15:40:20.539915 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.539919 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.539923 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.539927 | orchestrator | 2025-06-03 15:40:20.539931 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:20.539935 | orchestrator | Tuesday 03 June 2025 15:38:24 +0000 (0:00:00.321) 0:09:12.111 ********** 2025-06-03 15:40:20.539939 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.539943 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.539947 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.539950 | orchestrator | 2025-06-03 15:40:20.539954 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-03 15:40:20.539958 | orchestrator | Tuesday 03 June 2025 15:38:25 +0000 (0:00:00.292) 0:09:12.403 ********** 2025-06-03 15:40:20.539962 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539966 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539970 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539974 | orchestrator | 2025-06-03 15:40:20.539978 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-03 15:40:20.539982 | orchestrator | Tuesday 03 June 2025 15:38:25 +0000 (0:00:00.346) 0:09:12.750 ********** 2025-06-03 15:40:20.539986 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.539990 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.539994 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.539998 | orchestrator | 2025-06-03 15:40:20.540001 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-03 15:40:20.540005 | orchestrator | Tuesday 03 June 2025 15:38:26 +0000 (0:00:00.777) 0:09:13.527 ********** 2025-06-03 15:40:20.540009 | orchestrator | skipping: [testbed-node-4] 2025-06-03 
15:40:20.540013 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.540017 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-03 15:40:20.540021 | orchestrator | 2025-06-03 15:40:20.540025 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-03 15:40:20.540029 | orchestrator | Tuesday 03 June 2025 15:38:26 +0000 (0:00:00.351) 0:09:13.878 ********** 2025-06-03 15:40:20.540033 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:40:20.540037 | orchestrator | 2025-06-03 15:40:20.540041 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-03 15:40:20.540045 | orchestrator | Tuesday 03 June 2025 15:38:28 +0000 (0:00:02.202) 0:09:16.081 ********** 2025-06-03 15:40:20.540050 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-03 15:40:20.540055 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.540059 | orchestrator | 2025-06-03 15:40:20.540067 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-03 15:40:20.540071 | orchestrator | Tuesday 03 June 2025 15:38:28 +0000 (0:00:00.186) 0:09:16.267 ********** 2025-06-03 15:40:20.540076 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:40:20.540086 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:40:20.540091 | orchestrator | 2025-06-03 15:40:20.540095 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-06-03 15:40:20.540099 | orchestrator | Tuesday 03 June 2025 15:38:37 +0000 (0:00:08.930) 0:09:25.198 ********** 2025-06-03 15:40:20.540102 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:40:20.540106 | orchestrator | 2025-06-03 15:40:20.540110 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-03 15:40:20.540114 | orchestrator | Tuesday 03 June 2025 15:38:41 +0000 (0:00:03.676) 0:09:28.874 ********** 2025-06-03 15:40:20.540118 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.540122 | orchestrator | 2025-06-03 15:40:20.540128 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-03 15:40:20.540132 | orchestrator | Tuesday 03 June 2025 15:38:42 +0000 (0:00:00.640) 0:09:29.515 ********** 2025-06-03 15:40:20.540136 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-03 15:40:20.540140 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-03 15:40:20.540144 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-03 
15:40:20.540148 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-03 15:40:20.540152 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-03 15:40:20.540156 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-03 15:40:20.540159 | orchestrator | 2025-06-03 15:40:20.540163 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-03 15:40:20.540167 | orchestrator | Tuesday 03 June 2025 15:38:43 +0000 (0:00:01.108) 0:09:30.623 ********** 2025-06-03 15:40:20.540171 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.540175 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-03 15:40:20.540179 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-03 15:40:20.540183 | orchestrator | 2025-06-03 15:40:20.540187 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-03 15:40:20.540191 | orchestrator | Tuesday 03 June 2025 15:38:45 +0000 (0:00:02.431) 0:09:33.055 ********** 2025-06-03 15:40:20.540195 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-03 15:40:20.540199 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-03 15:40:20.540203 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.540207 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-03 15:40:20.540211 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-03 15:40:20.540214 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.540218 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-03 15:40:20.540222 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-03 15:40:20.540226 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.540230 | orchestrator | 2025-06-03 15:40:20.540234 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-03 15:40:20.540241 | orchestrator | Tuesday 03 June 2025 15:38:47 +0000 (0:00:01.300) 0:09:34.356 ********** 2025-06-03 15:40:20.540245 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.540249 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.540253 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.540257 | orchestrator | 2025-06-03 15:40:20.540261 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-03 15:40:20.540265 | orchestrator | Tuesday 03 June 2025 15:38:49 +0000 (0:00:02.702) 0:09:37.058 ********** 2025-06-03 15:40:20.540269 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.540273 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.540277 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.540280 | orchestrator | 2025-06-03 15:40:20.540284 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-03 15:40:20.540288 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:00.311) 0:09:37.370 ********** 2025-06-03 15:40:20.540292 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.540296 | orchestrator | 2025-06-03 15:40:20.540300 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 
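The create_mds_filesystems.yml block above runs once (delegated to testbed-node-0) and is the containerized equivalent of the usual manual CephFS bootstrap: create a data and a metadata pool with the parameters shown in the loop items, tie them together into a filesystem, and give each MDS host its own keyring. Expressed as plain ceph commands it is approximately the following (pool names and pg counts are taken from the log; the default filesystem name cephfs and the MDS keyring caps from the upstream documentation are assumptions, not the literal ceph-ansible templates):

    ceph osd pool create cephfs_data 16
    ceph osd pool create cephfs_metadata 16
    ceph fs new cephfs cephfs_metadata cephfs_data
    # one keyring per MDS host, e.g. testbed-node-3
    ceph auth get-or-create mds.testbed-node-3 mon 'allow profile mds' mds 'allow' osd 'allow rwx'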
2025-06-03 15:40:20.540304 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:00.708) 0:09:38.078 ********** 2025-06-03 15:40:20.540308 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.540312 | orchestrator | 2025-06-03 15:40:20.540316 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-03 15:40:20.540320 | orchestrator | Tuesday 03 June 2025 15:38:51 +0000 (0:00:00.499) 0:09:38.578 ********** 2025-06-03 15:40:20.540324 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.540328 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.540332 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.540336 | orchestrator | 2025-06-03 15:40:20.540339 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-03 15:40:20.540343 | orchestrator | Tuesday 03 June 2025 15:38:52 +0000 (0:00:01.201) 0:09:39.779 ********** 2025-06-03 15:40:20.540347 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.540351 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.540355 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.540359 | orchestrator | 2025-06-03 15:40:20.540363 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-03 15:40:20.540367 | orchestrator | Tuesday 03 June 2025 15:38:53 +0000 (0:00:01.355) 0:09:41.135 ********** 2025-06-03 15:40:20.540373 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.540377 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.540381 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.540384 | orchestrator | 2025-06-03 15:40:20.540388 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-06-03 15:40:20.540392 | orchestrator | Tuesday 03 June 2025 15:38:55 +0000 (0:00:01.826) 0:09:42.961 ********** 2025-06-03 15:40:20.540396 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.540400 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.540404 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.540408 | orchestrator | 2025-06-03 15:40:20.540412 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-03 15:40:20.540416 | orchestrator | Tuesday 03 June 2025 15:38:57 +0000 (0:00:02.069) 0:09:45.031 ********** 2025-06-03 15:40:20.540420 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.540424 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.540427 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.540431 | orchestrator | 2025-06-03 15:40:20.540437 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-03 15:40:20.540441 | orchestrator | Tuesday 03 June 2025 15:38:59 +0000 (0:00:01.722) 0:09:46.754 ********** 2025-06-03 15:40:20.540445 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.540452 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.540456 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.540460 | orchestrator | 2025-06-03 15:40:20.540464 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-03 15:40:20.540468 | orchestrator | Tuesday 03 June 2025 15:39:00 +0000 (0:00:00.675) 0:09:47.430 ********** 2025-06-03 15:40:20.540472 | orchestrator | 
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.540476 | orchestrator | 2025-06-03 15:40:20.540480 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-03 15:40:20.540484 | orchestrator | Tuesday 03 June 2025 15:39:01 +0000 (0:00:01.125) 0:09:48.555 ********** 2025-06-03 15:40:20.540488 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.540491 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.540495 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.540499 | orchestrator | 2025-06-03 15:40:20.540503 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-03 15:40:20.540507 | orchestrator | Tuesday 03 June 2025 15:39:01 +0000 (0:00:00.416) 0:09:48.972 ********** 2025-06-03 15:40:20.540511 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.540515 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.540519 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.540523 | orchestrator | 2025-06-03 15:40:20.540527 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-03 15:40:20.540531 | orchestrator | Tuesday 03 June 2025 15:39:02 +0000 (0:00:01.293) 0:09:50.265 ********** 2025-06-03 15:40:20.540535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:20.540538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:20.540542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:20.540546 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.540550 | orchestrator | 2025-06-03 15:40:20.540583 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-03 15:40:20.540587 | orchestrator | Tuesday 03 June 2025 15:39:04 +0000 (0:00:01.149) 0:09:51.415 ********** 2025-06-03 15:40:20.540591 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.540595 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.540599 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.540603 | orchestrator | 2025-06-03 15:40:20.540607 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-03 15:40:20.540611 | orchestrator | 2025-06-03 15:40:20.540615 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-03 15:40:20.540619 | orchestrator | Tuesday 03 June 2025 15:39:04 +0000 (0:00:00.829) 0:09:52.244 ********** 2025-06-03 15:40:20.540623 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.540627 | orchestrator | 2025-06-03 15:40:20.540631 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-03 15:40:20.540635 | orchestrator | Tuesday 03 June 2025 15:39:05 +0000 (0:00:00.549) 0:09:52.794 ********** 2025-06-03 15:40:20.540639 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.540643 | orchestrator | 2025-06-03 15:40:20.540646 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-03 15:40:20.540650 | orchestrator | Tuesday 03 June 2025 15:39:06 +0000 (0:00:00.858) 
0:09:53.653 ********** 2025-06-03 15:40:20.540654 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.540658 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.540662 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.540666 | orchestrator | 2025-06-03 15:40:20.540670 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-03 15:40:20.540674 | orchestrator | Tuesday 03 June 2025 15:39:06 +0000 (0:00:00.350) 0:09:54.003 ********** 2025-06-03 15:40:20.540681 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.540685 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.540688 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.540692 | orchestrator | 2025-06-03 15:40:20.540696 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-03 15:40:20.540700 | orchestrator | Tuesday 03 June 2025 15:39:07 +0000 (0:00:00.717) 0:09:54.721 ********** 2025-06-03 15:40:20.540704 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.540708 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.540712 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.540716 | orchestrator | 2025-06-03 15:40:20.540720 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-03 15:40:20.540724 | orchestrator | Tuesday 03 June 2025 15:39:08 +0000 (0:00:00.765) 0:09:55.486 ********** 2025-06-03 15:40:20.540728 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.540732 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.540738 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.540742 | orchestrator | 2025-06-03 15:40:20.540746 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-03 15:40:20.540750 | orchestrator | Tuesday 03 June 2025 15:39:09 +0000 (0:00:01.244) 0:09:56.731 ********** 2025-06-03 15:40:20.540754 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.540758 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.540761 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.540765 | orchestrator | 2025-06-03 15:40:20.540769 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-03 15:40:20.540773 | orchestrator | Tuesday 03 June 2025 15:39:09 +0000 (0:00:00.333) 0:09:57.064 ********** 2025-06-03 15:40:20.540777 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.540781 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.540785 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.540789 | orchestrator | 2025-06-03 15:40:20.540793 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-03 15:40:20.540799 | orchestrator | Tuesday 03 June 2025 15:39:10 +0000 (0:00:00.316) 0:09:57.381 ********** 2025-06-03 15:40:20.540803 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.540807 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.540811 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.540815 | orchestrator | 2025-06-03 15:40:20.540819 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-03 15:40:20.540823 | orchestrator | Tuesday 03 June 2025 15:39:10 +0000 (0:00:00.375) 0:09:57.757 ********** 2025-06-03 15:40:20.540827 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.540831 
| orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.540834 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.540838 | orchestrator | 2025-06-03 15:40:20.540842 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-03 15:40:20.540846 | orchestrator | Tuesday 03 June 2025 15:39:11 +0000 (0:00:01.112) 0:09:58.869 ********** 2025-06-03 15:40:20.540850 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.540854 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.540858 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.540862 | orchestrator | 2025-06-03 15:40:20.540866 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-03 15:40:20.540869 | orchestrator | Tuesday 03 June 2025 15:39:12 +0000 (0:00:00.784) 0:09:59.653 ********** 2025-06-03 15:40:20.540873 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.540877 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.540881 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.540885 | orchestrator | 2025-06-03 15:40:20.540889 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-03 15:40:20.540893 | orchestrator | Tuesday 03 June 2025 15:39:12 +0000 (0:00:00.338) 0:09:59.992 ********** 2025-06-03 15:40:20.540897 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.540901 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.540907 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.540911 | orchestrator | 2025-06-03 15:40:20.540915 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:20.540919 | orchestrator | Tuesday 03 June 2025 15:39:13 +0000 (0:00:00.350) 0:10:00.342 ********** 2025-06-03 15:40:20.540923 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.540927 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.540931 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.540935 | orchestrator | 2025-06-03 15:40:20.540939 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:20.540943 | orchestrator | Tuesday 03 June 2025 15:39:13 +0000 (0:00:00.701) 0:10:01.044 ********** 2025-06-03 15:40:20.540947 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.540950 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.540954 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.540958 | orchestrator | 2025-06-03 15:40:20.540962 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:20.540966 | orchestrator | Tuesday 03 June 2025 15:39:14 +0000 (0:00:00.348) 0:10:01.393 ********** 2025-06-03 15:40:20.540970 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.540974 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.540978 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.540982 | orchestrator | 2025-06-03 15:40:20.540985 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:20.540989 | orchestrator | Tuesday 03 June 2025 15:39:14 +0000 (0:00:00.402) 0:10:01.796 ********** 2025-06-03 15:40:20.540993 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.540997 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.541001 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.541005 | 
orchestrator | 2025-06-03 15:40:20.541009 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 15:40:20.541013 | orchestrator | Tuesday 03 June 2025 15:39:14 +0000 (0:00:00.399) 0:10:02.196 ********** 2025-06-03 15:40:20.541017 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.541020 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.541024 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.541028 | orchestrator | 2025-06-03 15:40:20.541032 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:20.541036 | orchestrator | Tuesday 03 June 2025 15:39:15 +0000 (0:00:00.823) 0:10:03.020 ********** 2025-06-03 15:40:20.541040 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.541044 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.541048 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.541052 | orchestrator | 2025-06-03 15:40:20.541055 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-03 15:40:20.541059 | orchestrator | Tuesday 03 June 2025 15:39:16 +0000 (0:00:00.495) 0:10:03.515 ********** 2025-06-03 15:40:20.541063 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.541067 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.541071 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.541075 | orchestrator | 2025-06-03 15:40:20.541079 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-03 15:40:20.541083 | orchestrator | Tuesday 03 June 2025 15:39:16 +0000 (0:00:00.418) 0:10:03.934 ********** 2025-06-03 15:40:20.541087 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.541091 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.541094 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.541098 | orchestrator | 2025-06-03 15:40:20.541104 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-03 15:40:20.541108 | orchestrator | Tuesday 03 June 2025 15:39:17 +0000 (0:00:00.849) 0:10:04.783 ********** 2025-06-03 15:40:20.541112 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.541116 | orchestrator | 2025-06-03 15:40:20.541120 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-03 15:40:20.541127 | orchestrator | Tuesday 03 June 2025 15:39:18 +0000 (0:00:00.625) 0:10:05.409 ********** 2025-06-03 15:40:20.541131 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.541135 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-03 15:40:20.541139 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-03 15:40:20.541143 | orchestrator | 2025-06-03 15:40:20.541147 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-03 15:40:20.541153 | orchestrator | Tuesday 03 June 2025 15:39:20 +0000 (0:00:02.105) 0:10:07.514 ********** 2025-06-03 15:40:20.541158 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-03 15:40:20.541162 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-03 15:40:20.541166 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.541169 | orchestrator | changed: [testbed-node-4] => 
(item=None) 2025-06-03 15:40:20.541173 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-03 15:40:20.541177 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-03 15:40:20.541181 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.541185 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-03 15:40:20.541189 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.541193 | orchestrator | 2025-06-03 15:40:20.541197 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-03 15:40:20.541201 | orchestrator | Tuesday 03 June 2025 15:39:21 +0000 (0:00:01.372) 0:10:08.887 ********** 2025-06-03 15:40:20.541205 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.541209 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.541212 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.541216 | orchestrator | 2025-06-03 15:40:20.541220 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-03 15:40:20.541224 | orchestrator | Tuesday 03 June 2025 15:39:21 +0000 (0:00:00.310) 0:10:09.198 ********** 2025-06-03 15:40:20.541228 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.541232 | orchestrator | 2025-06-03 15:40:20.541236 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-03 15:40:20.541240 | orchestrator | Tuesday 03 June 2025 15:39:22 +0000 (0:00:00.599) 0:10:09.797 ********** 2025-06-03 15:40:20.541244 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.541248 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.541252 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.541256 | orchestrator | 2025-06-03 15:40:20.541260 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-03 15:40:20.541264 | orchestrator | Tuesday 03 June 2025 15:39:24 +0000 (0:00:01.559) 0:10:11.357 ********** 2025-06-03 15:40:20.541268 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.541272 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-03 15:40:20.541276 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.541280 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-03 15:40:20.541284 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.541287 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-03 15:40:20.541294 | orchestrator | 2025-06-03 15:40:20.541298 | orchestrator | TASK [ceph-rgw : Get keys from monitors] 
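The "Create rados gateway directories" and "Create rgw keyrings" tasks above run delegated to the first monitor (testbed-node-0) on behalf of each rgw host. A minimal stand-alone sketch of the keyring step, using the plain ceph CLI and the rgw0 instance name from the log; the capability flags are illustrative only, and ceph-ansible's real task uses its own module and variables:

- name: Create rgw keyring on the first mon (illustrative sketch)
  ansible.builtin.command: >
    ceph auth get-or-create client.rgw.{{ inventory_hostname }}.rgw0
    mon 'allow rw' osd 'allow rwx'
    -o /etc/ceph/ceph.client.rgw.{{ inventory_hostname }}.rgw0.keyring
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: true
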
*************************************** 2025-06-03 15:40:20.541302 | orchestrator | Tuesday 03 June 2025 15:39:28 +0000 (0:00:04.854) 0:10:16.212 ********** 2025-06-03 15:40:20.541306 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.541310 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-03 15:40:20.541314 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.541318 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-03 15:40:20.541322 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:20.541325 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-03 15:40:20.541329 | orchestrator | 2025-06-03 15:40:20.541333 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-03 15:40:20.541337 | orchestrator | Tuesday 03 June 2025 15:39:31 +0000 (0:00:02.572) 0:10:18.785 ********** 2025-06-03 15:40:20.541341 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-03 15:40:20.541347 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.541351 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-03 15:40:20.541355 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.541359 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-03 15:40:20.541363 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.541367 | orchestrator | 2025-06-03 15:40:20.541371 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-03 15:40:20.541375 | orchestrator | Tuesday 03 June 2025 15:39:32 +0000 (0:00:01.183) 0:10:19.968 ********** 2025-06-03 15:40:20.541379 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-03 15:40:20.541383 | orchestrator | 2025-06-03 15:40:20.541386 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-03 15:40:20.541390 | orchestrator | Tuesday 03 June 2025 15:39:32 +0000 (0:00:00.242) 0:10:20.211 ********** 2025-06-03 15:40:20.541397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-03 15:40:20.541401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-03 15:40:20.541405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-03 15:40:20.541409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-03 15:40:20.541413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-03 15:40:20.541417 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.541421 | orchestrator | 2025-06-03 15:40:20.541425 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-03 15:40:20.541429 | orchestrator | Tuesday 03 June 2025 15:39:33 +0000 (0:00:00.885) 0:10:21.096 ********** 2025-06-03 15:40:20.541433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-03 15:40:20.541522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-03 15:40:20.541529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-03 15:40:20.541533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-03 15:40:20.541537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-03 15:40:20.541544 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.541548 | orchestrator | 2025-06-03 15:40:20.541565 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-03 15:40:20.541569 | orchestrator | Tuesday 03 June 2025 15:39:34 +0000 (0:00:01.169) 0:10:22.266 ********** 2025-06-03 15:40:20.541573 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-03 15:40:20.541577 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-03 15:40:20.541581 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-03 15:40:20.541585 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-03 15:40:20.541589 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-03 15:40:20.541593 | orchestrator | 2025-06-03 15:40:20.541597 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-03 15:40:20.541601 | orchestrator | Tuesday 03 June 2025 15:40:06 +0000 (0:00:31.606) 0:10:53.872 ********** 2025-06-03 15:40:20.541605 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.541608 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.541612 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.541616 | orchestrator | 2025-06-03 15:40:20.541620 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-03 15:40:20.541624 | orchestrator | Tuesday 03 June 2025 15:40:06 +0000 (0:00:00.359) 0:10:54.231 ********** 2025-06-03 15:40:20.541628 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.541632 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.541636 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.541640 | orchestrator | 2025-06-03 15:40:20.541644 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-03 15:40:20.541647 | orchestrator | Tuesday 03 June 2025 15:40:07 +0000 (0:00:00.318) 0:10:54.550 ********** 2025-06-03 15:40:20.541651 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, 
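The "Create rgw pools" task above (31.6 s in the recap below) creates each default.rgw.* pool with pg_num 8 and size 3, again delegated to the first monitor. A hedged equivalent using the plain ceph CLI; the role itself drives this through its own pool module, and the replica count would additionally be applied with "ceph osd pool set <pool> size 3":

- name: Create rgw pools (illustrative CLI equivalent)
  ansible.builtin.command: >
    ceph osd pool create {{ item.key }} {{ item.value.pg_num }} replicated
  loop:
    - { key: default.rgw.buckets.data,  value: { pg_num: 8, size: 3 } }
    - { key: default.rgw.buckets.index, value: { pg_num: 8, size: 3 } }
    - { key: default.rgw.control,       value: { pg_num: 8, size: 3 } }
    - { key: default.rgw.log,           value: { pg_num: 8, size: 3 } }
    - { key: default.rgw.meta,          value: { pg_num: 8, size: 3 } }
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
  changed_when: true
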
testbed-node-4, testbed-node-5 2025-06-03 15:40:20.541655 | orchestrator | 2025-06-03 15:40:20.541661 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-03 15:40:20.541666 | orchestrator | Tuesday 03 June 2025 15:40:08 +0000 (0:00:00.790) 0:10:55.340 ********** 2025-06-03 15:40:20.541670 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.541674 | orchestrator | 2025-06-03 15:40:20.541677 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-03 15:40:20.541681 | orchestrator | Tuesday 03 June 2025 15:40:08 +0000 (0:00:00.625) 0:10:55.965 ********** 2025-06-03 15:40:20.541685 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.541689 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.541693 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.541697 | orchestrator | 2025-06-03 15:40:20.541701 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-03 15:40:20.541705 | orchestrator | Tuesday 03 June 2025 15:40:09 +0000 (0:00:01.212) 0:10:57.178 ********** 2025-06-03 15:40:20.541712 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.541716 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.541720 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.541724 | orchestrator | 2025-06-03 15:40:20.541728 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-03 15:40:20.541734 | orchestrator | Tuesday 03 June 2025 15:40:11 +0000 (0:00:01.402) 0:10:58.580 ********** 2025-06-03 15:40:20.541738 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:20.541742 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:20.541746 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:20.541750 | orchestrator | 2025-06-03 15:40:20.541754 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-03 15:40:20.541758 | orchestrator | Tuesday 03 June 2025 15:40:13 +0000 (0:00:01.751) 0:11:00.332 ********** 2025-06-03 15:40:20.541762 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.541766 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.541770 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:20.541774 | orchestrator | 2025-06-03 15:40:20.541778 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-03 15:40:20.541782 | orchestrator | Tuesday 03 June 2025 15:40:15 +0000 (0:00:02.551) 0:11:02.883 ********** 2025-06-03 15:40:20.541786 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.541790 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.541793 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.541797 | orchestrator | 2025-06-03 15:40:20.541801 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-03 15:40:20.541805 | orchestrator | Tuesday 03 June 2025 15:40:15 +0000 (0:00:00.351) 0:11:03.234 ********** 2025-06-03 
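The "Generate systemd unit file", "Generate systemd ceph-radosgw target file" and "Enable ceph-radosgw.target" tasks above template unit files onto each rgw node and enable them before the container is started. A minimal sketch of that template-and-enable pattern; the template name and unit instance name below are assumptions, the real templates ship with ceph-ansible:

- name: Generate systemd unit file (sketch)
  ansible.builtin.template:
    src: ceph-radosgw@.service.j2            # hypothetical template name
    dest: /etc/systemd/system/ceph-radosgw@.service
    mode: "0644"

- name: Enable and start the rgw instance (sketch)
  ansible.builtin.systemd:
    name: "ceph-radosgw@rgw.{{ ansible_facts.hostname }}.rgw0"   # assumed instance naming
    enabled: true
    state: started
    daemon_reload: true
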
15:40:20.541809 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:20.541813 | orchestrator | 2025-06-03 15:40:20.541817 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-03 15:40:20.541821 | orchestrator | Tuesday 03 June 2025 15:40:16 +0000 (0:00:00.566) 0:11:03.800 ********** 2025-06-03 15:40:20.541825 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.541829 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.541833 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.541837 | orchestrator | 2025-06-03 15:40:20.541840 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-03 15:40:20.541844 | orchestrator | Tuesday 03 June 2025 15:40:17 +0000 (0:00:00.590) 0:11:04.391 ********** 2025-06-03 15:40:20.541848 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.541852 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:20.541856 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:20.541860 | orchestrator | 2025-06-03 15:40:20.541864 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-03 15:40:20.541868 | orchestrator | Tuesday 03 June 2025 15:40:17 +0000 (0:00:00.428) 0:11:04.820 ********** 2025-06-03 15:40:20.541872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:20.541876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:20.541880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:20.541884 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:20.541887 | orchestrator | 2025-06-03 15:40:20.541891 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-03 15:40:20.541895 | orchestrator | Tuesday 03 June 2025 15:40:18 +0000 (0:00:00.630) 0:11:05.450 ********** 2025-06-03 15:40:20.541899 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:20.541903 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:20.541907 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:20.541911 | orchestrator | 2025-06-03 15:40:20.541915 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:40:20.541919 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-06-03 15:40:20.541928 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-03 15:40:20.541932 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-03 15:40:20.541939 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-06-03 15:40:20.541943 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-03 15:40:20.541947 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-03 15:40:20.541951 | orchestrator | 2025-06-03 15:40:20.541955 | orchestrator | 2025-06-03 15:40:20.541959 | orchestrator | 2025-06-03 15:40:20.541963 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:40:20.541967 | orchestrator | Tuesday 
03 June 2025 15:40:18 +0000 (0:00:00.286) 0:11:05.736 ********** 2025-06-03 15:40:20.541971 | orchestrator | =============================================================================== 2025-06-03 15:40:20.541974 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 67.06s 2025-06-03 15:40:20.541981 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.99s 2025-06-03 15:40:20.541985 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.61s 2025-06-03 15:40:20.541989 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.73s 2025-06-03 15:40:20.541993 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 16.30s 2025-06-03 15:40:20.541996 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.16s 2025-06-03 15:40:20.542000 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.28s 2025-06-03 15:40:20.542004 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.61s 2025-06-03 15:40:20.542008 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.93s 2025-06-03 15:40:20.542012 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.71s 2025-06-03 15:40:20.542032 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.45s 2025-06-03 15:40:20.542036 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.11s 2025-06-03 15:40:20.542040 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.85s 2025-06-03 15:40:20.542044 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.59s 2025-06-03 15:40:20.542048 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.82s 2025-06-03 15:40:20.542052 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.81s 2025-06-03 15:40:20.542056 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.68s 2025-06-03 15:40:20.542060 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.47s 2025-06-03 15:40:20.542064 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.33s 2025-06-03 15:40:20.542068 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.27s 2025-06-03 15:40:20.542072 | orchestrator | 2025-06-03 15:40:20 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:20.542076 | orchestrator | 2025-06-03 15:40:20 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:40:23.557642 | orchestrator | 2025-06-03 15:40:23 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:23.561771 | orchestrator | 2025-06-03 15:40:23 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:23.564324 | orchestrator | 2025-06-03 15:40:23 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:23.565076 | orchestrator | 2025-06-03 15:40:23 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:40:26.611109 | orchestrator | 2025-06-03 15:40:26 | INFO  | Task 
cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:26.612869 | orchestrator | 2025-06-03 15:40:26 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:26.614508 | orchestrator | 2025-06-03 15:40:26 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:26.614573 | orchestrator | 2025-06-03 15:40:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:40:29.663697 | orchestrator | 2025-06-03 15:40:29 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:29.665314 | orchestrator | 2025-06-03 15:40:29 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:29.668440 | orchestrator | 2025-06-03 15:40:29 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:29.668498 | orchestrator | 2025-06-03 15:40:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:40:32.717200 | orchestrator | 2025-06-03 15:40:32 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:32.719275 | orchestrator | 2025-06-03 15:40:32 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:32.722273 | orchestrator | 2025-06-03 15:40:32 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:32.722362 | orchestrator | 2025-06-03 15:40:32 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:40:35.776606 | orchestrator | 2025-06-03 15:40:35 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:35.777241 | orchestrator | 2025-06-03 15:40:35 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:35.778825 | orchestrator | 2025-06-03 15:40:35 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:35.778889 | orchestrator | 2025-06-03 15:40:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:40:38.828244 | orchestrator | 2025-06-03 15:40:38 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:38.831351 | orchestrator | 2025-06-03 15:40:38 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:38.833449 | orchestrator | 2025-06-03 15:40:38 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:38.833781 | orchestrator | 2025-06-03 15:40:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:40:41.872862 | orchestrator | 2025-06-03 15:40:41 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:41.873615 | orchestrator | 2025-06-03 15:40:41 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:41.875161 | orchestrator | 2025-06-03 15:40:41 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:41.875509 | orchestrator | 2025-06-03 15:40:41 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:40:44.917783 | orchestrator | 2025-06-03 15:40:44 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:44.919803 | orchestrator | 2025-06-03 15:40:44 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:44.921958 | orchestrator | 2025-06-03 15:40:44 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:44.922073 | orchestrator | 2025-06-03 15:40:44 | INFO  | Wait 1 second(s) until the next 
check 2025-06-03 15:40:47.960050 | orchestrator | 2025-06-03 15:40:47 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:47.960180 | orchestrator | 2025-06-03 15:40:47 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:47.960996 | orchestrator | 2025-06-03 15:40:47 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:47.961058 | orchestrator | 2025-06-03 15:40:47 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:40:51.018845 | orchestrator | 2025-06-03 15:40:51 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:51.021705 | orchestrator | 2025-06-03 15:40:51 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:51.023807 | orchestrator | 2025-06-03 15:40:51 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:51.023843 | orchestrator | 2025-06-03 15:40:51 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:40:54.073149 | orchestrator | 2025-06-03 15:40:54 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:54.074577 | orchestrator | 2025-06-03 15:40:54 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:54.076448 | orchestrator | 2025-06-03 15:40:54 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:54.076486 | orchestrator | 2025-06-03 15:40:54 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:40:57.121833 | orchestrator | 2025-06-03 15:40:57 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:40:57.121979 | orchestrator | 2025-06-03 15:40:57 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:40:57.124250 | orchestrator | 2025-06-03 15:40:57 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:40:57.124469 | orchestrator | 2025-06-03 15:40:57 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:00.157978 | orchestrator | 2025-06-03 15:41:00 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:00.159318 | orchestrator | 2025-06-03 15:41:00 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:00.161900 | orchestrator | 2025-06-03 15:41:00 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:00.161999 | orchestrator | 2025-06-03 15:41:00 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:03.204077 | orchestrator | 2025-06-03 15:41:03 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:03.205819 | orchestrator | 2025-06-03 15:41:03 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:03.208017 | orchestrator | 2025-06-03 15:41:03 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:03.208108 | orchestrator | 2025-06-03 15:41:03 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:06.256765 | orchestrator | 2025-06-03 15:41:06 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:06.259543 | orchestrator | 2025-06-03 15:41:06 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:06.261164 | orchestrator | 2025-06-03 15:41:06 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 
15:41:06.261213 | orchestrator | 2025-06-03 15:41:06 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:09.306130 | orchestrator | 2025-06-03 15:41:09 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:09.307267 | orchestrator | 2025-06-03 15:41:09 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:09.309094 | orchestrator | 2025-06-03 15:41:09 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:09.309163 | orchestrator | 2025-06-03 15:41:09 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:12.351917 | orchestrator | 2025-06-03 15:41:12 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:12.353003 | orchestrator | 2025-06-03 15:41:12 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:12.355214 | orchestrator | 2025-06-03 15:41:12 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:12.355275 | orchestrator | 2025-06-03 15:41:12 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:15.407430 | orchestrator | 2025-06-03 15:41:15 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:15.409374 | orchestrator | 2025-06-03 15:41:15 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:15.411903 | orchestrator | 2025-06-03 15:41:15 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:15.411954 | orchestrator | 2025-06-03 15:41:15 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:18.468228 | orchestrator | 2025-06-03 15:41:18 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:18.475952 | orchestrator | 2025-06-03 15:41:18 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:18.479038 | orchestrator | 2025-06-03 15:41:18 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:18.479112 | orchestrator | 2025-06-03 15:41:18 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:21.532753 | orchestrator | 2025-06-03 15:41:21 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:21.535391 | orchestrator | 2025-06-03 15:41:21 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:21.536651 | orchestrator | 2025-06-03 15:41:21 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:21.536706 | orchestrator | 2025-06-03 15:41:21 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:24.598579 | orchestrator | 2025-06-03 15:41:24 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:24.600672 | orchestrator | 2025-06-03 15:41:24 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:24.605196 | orchestrator | 2025-06-03 15:41:24 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:24.605297 | orchestrator | 2025-06-03 15:41:24 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:27.657822 | orchestrator | 2025-06-03 15:41:27 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:27.660831 | orchestrator | 2025-06-03 15:41:27 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:27.663159 | orchestrator | 2025-06-03 
15:41:27 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:27.664144 | orchestrator | 2025-06-03 15:41:27 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:30.716957 | orchestrator | 2025-06-03 15:41:30 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:30.718992 | orchestrator | 2025-06-03 15:41:30 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:30.721145 | orchestrator | 2025-06-03 15:41:30 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:30.721216 | orchestrator | 2025-06-03 15:41:30 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:33.772213 | orchestrator | 2025-06-03 15:41:33 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:33.773647 | orchestrator | 2025-06-03 15:41:33 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:33.775610 | orchestrator | 2025-06-03 15:41:33 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:33.775680 | orchestrator | 2025-06-03 15:41:33 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:36.822407 | orchestrator | 2025-06-03 15:41:36 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:36.823094 | orchestrator | 2025-06-03 15:41:36 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:36.823129 | orchestrator | 2025-06-03 15:41:36 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:36.823143 | orchestrator | 2025-06-03 15:41:36 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:39.873715 | orchestrator | 2025-06-03 15:41:39 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:39.873901 | orchestrator | 2025-06-03 15:41:39 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:39.873917 | orchestrator | 2025-06-03 15:41:39 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:39.873943 | orchestrator | 2025-06-03 15:41:39 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:42.923012 | orchestrator | 2025-06-03 15:41:42 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:42.925040 | orchestrator | 2025-06-03 15:41:42 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:42.926782 | orchestrator | 2025-06-03 15:41:42 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:42.926861 | orchestrator | 2025-06-03 15:41:42 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:45.976091 | orchestrator | 2025-06-03 15:41:45 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:45.978157 | orchestrator | 2025-06-03 15:41:45 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:45.980921 | orchestrator | 2025-06-03 15:41:45 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state STARTED 2025-06-03 15:41:45.980980 | orchestrator | 2025-06-03 15:41:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:49.024656 | orchestrator | 2025-06-03 15:41:49 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:49.025520 | orchestrator | 2025-06-03 15:41:49 | INFO  | Task 
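The repeated INFO lines above show the deployment tooling polling three task IDs once per second until they leave the STARTED state. Ansible expresses the same poll-and-retry idea with until/retries/delay; the example below is purely hypothetical and probes the freshly started RGW frontend from the log rather than the internal task queue:

- name: Wait until the rgw0 frontend answers (hypothetical poll example)
  ansible.builtin.uri:
    url: "http://192.168.16.13:8081"
    status_code: 200
  register: rgw_probe
  until: rgw_probe.status == 200
  retries: 120    # give up after about two minutes
  delay: 1        # one second between checks, matching the log cadence
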
8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:49.028806 | orchestrator | 2025-06-03 15:41:49.028868 | orchestrator | 2025-06-03 15:41:49.028879 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:41:49.028910 | orchestrator | 2025-06-03 15:41:49.028918 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:41:49.028927 | orchestrator | Tuesday 03 June 2025 15:38:35 +0000 (0:00:00.271) 0:00:00.271 ********** 2025-06-03 15:41:49.028935 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:49.028945 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:49.028953 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:49.028961 | orchestrator | 2025-06-03 15:41:49.028969 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:41:49.028977 | orchestrator | Tuesday 03 June 2025 15:38:35 +0000 (0:00:00.273) 0:00:00.544 ********** 2025-06-03 15:41:49.028985 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-03 15:41:49.028994 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-06-03 15:41:49.029002 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-03 15:41:49.029009 | orchestrator | 2025-06-03 15:41:49.029017 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-03 15:41:49.029025 | orchestrator | 2025-06-03 15:41:49.029033 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-03 15:41:49.029052 | orchestrator | Tuesday 03 June 2025 15:38:36 +0000 (0:00:00.420) 0:00:00.965 ********** 2025-06-03 15:41:49.029061 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:49.029069 | orchestrator | 2025-06-03 15:41:49.029077 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-03 15:41:49.029084 | orchestrator | Tuesday 03 June 2025 15:38:36 +0000 (0:00:00.483) 0:00:01.449 ********** 2025-06-03 15:41:49.029092 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-03 15:41:49.029100 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-03 15:41:49.029108 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-03 15:41:49.029116 | orchestrator | 2025-06-03 15:41:49.029124 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-03 15:41:49.029131 | orchestrator | Tuesday 03 June 2025 15:38:37 +0000 (0:00:00.700) 0:00:02.149 ********** 2025-06-03 15:41:49.029143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
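The "Setting sysctl values" task above raises vm.max_map_count to 262144 on every OpenSearch host, the value OpenSearch needs for its memory-mapped indices. A stand-alone equivalent, assuming the ansible.posix collection is available:

- name: Set vm.max_map_count for OpenSearch
  ansible.posix.sysctl:
    name: vm.max_map_count
    value: "262144"
    state: present
    sysctl_set: true
    reload: true
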
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
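Each container item above carries a healthcheck dict: healthcheck_curl against the node's own API port, a 30 s interval and timeout, 3 retries and a 5 s start period. Rendered in compose-style notation purely for readability (kolla-ansible consumes the dict directly, and healthcheck_curl is a helper script shipped in the kolla images):

services:
  opensearch:
    image: registry.osism.tech/kolla/release/opensearch:2.19.2.20250530
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s
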
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029229 | orchestrator | 2025-06-03 15:41:49.029237 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-03 15:41:49.029245 | orchestrator | Tuesday 03 June 2025 15:38:39 +0000 (0:00:01.712) 0:00:03.862 ********** 2025-06-03 15:41:49.029254 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:49.029262 | orchestrator | 2025-06-03 15:41:49.029270 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-03 15:41:49.029278 | orchestrator | Tuesday 03 June 2025 15:38:39 +0000 (0:00:00.501) 0:00:04.364 ********** 2025-06-03 15:41:49.029295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029368 | orchestrator | 2025-06-03 15:41:49.029376 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-03 15:41:49.029384 | orchestrator | Tuesday 03 June 2025 15:38:42 +0000 (0:00:02.549) 0:00:06.913 ********** 2025-06-03 15:41:49.029392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:49.029401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:41:49.029416 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:49.029429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:49.029443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:41:49.029453 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:49.029464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:49.029474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:41:49.029517 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:49.029531 | orchestrator | 2025-06-03 15:41:49.029543 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-03 15:41:49.029552 | orchestrator | Tuesday 03 June 2025 15:38:43 +0000 (0:00:01.395) 0:00:08.308 ********** 2025-06-03 15:41:49.029566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:49.029582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:41:49.029592 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:49.029601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:49.029611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:41:49.029629 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:49.029643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:49.029657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:41:49.029666 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:49.029674 | orchestrator | 2025-06-03 15:41:49.029682 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-03 15:41:49.029690 | orchestrator | Tuesday 03 June 2025 15:38:44 +0000 (0:00:00.874) 0:00:09.183 ********** 2025-06-03 15:41:49.029699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029778 | orchestrator | 2025-06-03 15:41:49.029786 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-03 15:41:49.029794 | orchestrator | Tuesday 03 June 2025 15:38:46 +0000 (0:00:02.376) 0:00:11.560 ********** 2025-06-03 15:41:49.029802 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:49.029810 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:49.029818 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:49.029825 | orchestrator | 2025-06-03 15:41:49.029833 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-03 15:41:49.029841 | orchestrator | Tuesday 
03 June 2025 15:38:49 +0000 (0:00:02.970) 0:00:14.531 ********** 2025-06-03 15:41:49.029849 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:49.029857 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:49.029865 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:49.029873 | orchestrator | 2025-06-03 15:41:49.029881 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-03 15:41:49.029889 | orchestrator | Tuesday 03 June 2025 15:38:51 +0000 (0:00:01.585) 0:00:16.116 ********** 2025-06-03 15:41:49.029904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:49.029941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:49.029982 | orchestrator | 2025-06-03 15:41:49.029990 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-03 15:41:49.029998 | orchestrator | Tuesday 03 June 2025 15:38:53 +0000 (0:00:02.046) 0:00:18.163 ********** 2025-06-03 15:41:49.030006 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:49.030063 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:49.030074 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:49.030082 
| orchestrator | 2025-06-03 15:41:49.030090 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-03 15:41:49.030098 | orchestrator | Tuesday 03 June 2025 15:38:53 +0000 (0:00:00.278) 0:00:18.441 ********** 2025-06-03 15:41:49.030106 | orchestrator | 2025-06-03 15:41:49.030114 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-03 15:41:49.030122 | orchestrator | Tuesday 03 June 2025 15:38:53 +0000 (0:00:00.069) 0:00:18.510 ********** 2025-06-03 15:41:49.030130 | orchestrator | 2025-06-03 15:41:49.030138 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-03 15:41:49.030145 | orchestrator | Tuesday 03 June 2025 15:38:54 +0000 (0:00:00.064) 0:00:18.575 ********** 2025-06-03 15:41:49.030153 | orchestrator | 2025-06-03 15:41:49.030161 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-03 15:41:49.030169 | orchestrator | Tuesday 03 June 2025 15:38:54 +0000 (0:00:00.207) 0:00:18.783 ********** 2025-06-03 15:41:49.030177 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:49.030185 | orchestrator | 2025-06-03 15:41:49.030193 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-03 15:41:49.030201 | orchestrator | Tuesday 03 June 2025 15:38:54 +0000 (0:00:00.181) 0:00:18.964 ********** 2025-06-03 15:41:49.030209 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:49.030217 | orchestrator | 2025-06-03 15:41:49.030225 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-06-03 15:41:49.030233 | orchestrator | Tuesday 03 June 2025 15:38:54 +0000 (0:00:00.192) 0:00:19.157 ********** 2025-06-03 15:41:49.030240 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:49.030248 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:49.030256 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:49.030264 | orchestrator | 2025-06-03 15:41:49.030272 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-03 15:41:49.030280 | orchestrator | Tuesday 03 June 2025 15:40:17 +0000 (0:01:22.957) 0:01:42.114 ********** 2025-06-03 15:41:49.030288 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:49.030295 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:49.030303 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:49.030311 | orchestrator | 2025-06-03 15:41:49.030319 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-03 15:41:49.030327 | orchestrator | Tuesday 03 June 2025 15:41:36 +0000 (0:01:18.715) 0:03:00.830 ********** 2025-06-03 15:41:49.030335 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:49.030343 | orchestrator | 2025-06-03 15:41:49.030351 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-03 15:41:49.030359 | orchestrator | Tuesday 03 June 2025 15:41:36 +0000 (0:00:00.641) 0:03:01.471 ********** 2025-06-03 15:41:49.030367 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:49.030374 | orchestrator | 2025-06-03 15:41:49.030382 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-03 15:41:49.030390 | orchestrator | Tuesday 03 
June 2025 15:41:39 +0000 (0:00:02.438) 0:03:03.910 ********** 2025-06-03 15:41:49.030398 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:49.030406 | orchestrator | 2025-06-03 15:41:49.030414 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-03 15:41:49.030422 | orchestrator | Tuesday 03 June 2025 15:41:41 +0000 (0:00:02.313) 0:03:06.224 ********** 2025-06-03 15:41:49.030430 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:49.030437 | orchestrator | 2025-06-03 15:41:49.030451 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-06-03 15:41:49.030459 | orchestrator | Tuesday 03 June 2025 15:41:44 +0000 (0:00:02.830) 0:03:09.055 ********** 2025-06-03 15:41:49.030467 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:49.030475 | orchestrator | 2025-06-03 15:41:49.030558 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:41:49.030571 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:41:49.030581 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:41:49.030589 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:41:49.030597 | orchestrator | 2025-06-03 15:41:49.030605 | orchestrator | 2025-06-03 15:41:49.030613 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:41:49.030620 | orchestrator | Tuesday 03 June 2025 15:41:47 +0000 (0:00:02.724) 0:03:11.780 ********** 2025-06-03 15:41:49.030628 | orchestrator | =============================================================================== 2025-06-03 15:41:49.030636 | orchestrator | opensearch : Restart opensearch container ------------------------------ 82.96s 2025-06-03 15:41:49.030648 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 78.72s 2025-06-03 15:41:49.030657 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.97s 2025-06-03 15:41:49.030665 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.83s 2025-06-03 15:41:49.030672 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.72s 2025-06-03 15:41:49.030680 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.55s 2025-06-03 15:41:49.030688 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.44s 2025-06-03 15:41:49.030696 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.38s 2025-06-03 15:41:49.030704 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.31s 2025-06-03 15:41:49.030712 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.05s 2025-06-03 15:41:49.030720 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.71s 2025-06-03 15:41:49.030728 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.59s 2025-06-03 15:41:49.030736 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.40s 2025-06-03 15:41:49.030744 | orchestrator | service-cert-copy : 
opensearch | Copying over backend internal TLS key --- 0.87s 2025-06-03 15:41:49.030752 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.70s 2025-06-03 15:41:49.030759 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.64s 2025-06-03 15:41:49.030767 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2025-06-03 15:41:49.030775 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2025-06-03 15:41:49.030783 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-06-03 15:41:49.030791 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.34s 2025-06-03 15:41:49.030799 | orchestrator | 2025-06-03 15:41:49 | INFO  | Task 17465f90-8a20-465d-b4a5-831ca841f7cd is in state SUCCESS 2025-06-03 15:41:49.030807 | orchestrator | 2025-06-03 15:41:49 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:52.061785 | orchestrator | 2025-06-03 15:41:52 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state STARTED 2025-06-03 15:41:52.062893 | orchestrator | 2025-06-03 15:41:52 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:52.062959 | orchestrator | 2025-06-03 15:41:52 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:55.109226 | orchestrator | 2025-06-03 15:41:55 | INFO  | Task cba0b02a-b9bc-430a-9ed9-b1dc2807c96d is in state SUCCESS 2025-06-03 15:41:55.110213 | orchestrator | 2025-06-03 15:41:55.110245 | orchestrator | 2025-06-03 15:41:55.110252 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-06-03 15:41:55.110258 | orchestrator | 2025-06-03 15:41:55.110263 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-03 15:41:55.110270 | orchestrator | Tuesday 03 June 2025 15:38:35 +0000 (0:00:00.101) 0:00:00.102 ********** 2025-06-03 15:41:55.110276 | orchestrator | ok: [localhost] => { 2025-06-03 15:41:55.110283 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-06-03 15:41:55.110289 | orchestrator | } 2025-06-03 15:41:55.110295 | orchestrator | 2025-06-03 15:41:55.110301 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-03 15:41:55.110306 | orchestrator | Tuesday 03 June 2025 15:38:35 +0000 (0:00:00.055) 0:00:00.157 ********** 2025-06-03 15:41:55.110312 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-03 15:41:55.110320 | orchestrator | ...ignoring 2025-06-03 15:41:55.110326 | orchestrator | 2025-06-03 15:41:55.110332 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-03 15:41:55.110337 | orchestrator | Tuesday 03 June 2025 15:38:38 +0000 (0:00:02.829) 0:00:02.986 ********** 2025-06-03 15:41:55.110342 | orchestrator | skipping: [localhost] 2025-06-03 15:41:55.110348 | orchestrator | 2025-06-03 15:41:55.110353 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-03 15:41:55.110359 | orchestrator | Tuesday 03 June 2025 15:38:38 +0000 (0:00:00.063) 0:00:03.050 ********** 2025-06-03 15:41:55.110364 | orchestrator | ok: [localhost] 2025-06-03 15:41:55.110370 | orchestrator | 2025-06-03 15:41:55.110376 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:41:55.110381 | orchestrator | 2025-06-03 15:41:55.110387 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:41:55.110392 | orchestrator | Tuesday 03 June 2025 15:38:38 +0000 (0:00:00.178) 0:00:03.229 ********** 2025-06-03 15:41:55.110398 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:55.110403 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:55.110409 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:55.110414 | orchestrator | 2025-06-03 15:41:55.110420 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:41:55.110425 | orchestrator | Tuesday 03 June 2025 15:38:39 +0000 (0:00:00.331) 0:00:03.561 ********** 2025-06-03 15:41:55.110431 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-03 15:41:55.110437 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-03 15:41:55.110442 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-03 15:41:55.110451 | orchestrator | 2025-06-03 15:41:55.110458 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-03 15:41:55.110466 | orchestrator | 2025-06-03 15:41:55.110792 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-03 15:41:55.110804 | orchestrator | Tuesday 03 June 2025 15:38:39 +0000 (0:00:00.668) 0:00:04.229 ********** 2025-06-03 15:41:55.110810 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:41:55.110819 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-03 15:41:55.110828 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-03 15:41:55.110840 | orchestrator | 2025-06-03 15:41:55.110851 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-03 15:41:55.110858 | orchestrator | Tuesday 03 June 2025 15:38:40 +0000 (0:00:00.384) 0:00:04.613 ********** 2025-06-03 15:41:55.110891 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:55.110901 | orchestrator | 2025-06-03 15:41:55.110909 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-03 15:41:55.110917 | orchestrator | Tuesday 03 June 2025 15:38:40 +0000 (0:00:00.626) 0:00:05.240 ********** 2025-06-03 
15:41:55.110946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:55.110965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:55.110984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:55.110994 | orchestrator | 2025-06-03 15:41:55.111011 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-03 15:41:55.111020 | orchestrator | Tuesday 03 June 2025 15:38:43 +0000 (0:00:03.186) 0:00:08.427 ********** 2025-06-03 15:41:55.111029 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.111038 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.111046 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.111055 | orchestrator | 2025-06-03 15:41:55.111063 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-03 15:41:55.111072 | orchestrator | Tuesday 03 June 2025 15:38:44 +0000 (0:00:00.763) 0:00:09.190 ********** 2025-06-03 15:41:55.111081 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.111090 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.111099 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.111108 | orchestrator | 2025-06-03 15:41:55.111117 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-03 15:41:55.111125 | orchestrator | Tuesday 03 June 2025 15:38:46 +0000 (0:00:01.537) 0:00:10.728 ********** 2025-06-03 15:41:55.111135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:55.111152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 
15:41:55.111162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:55.111172 | orchestrator | 2025-06-03 15:41:55.111177 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-03 15:41:55.111182 | orchestrator | Tuesday 03 June 2025 15:38:49 +0000 (0:00:03.585) 0:00:14.314 ********** 2025-06-03 15:41:55.111187 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.111192 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.111197 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.111202 | orchestrator | 2025-06-03 15:41:55.111207 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-03 15:41:55.111213 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:01.076) 0:00:15.390 ********** 2025-06-03 15:41:55.111218 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.111223 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:55.111228 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:55.111233 | orchestrator | 2025-06-03 15:41:55.111238 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-03 15:41:55.111243 | orchestrator | Tuesday 03 June 2025 15:38:55 +0000 (0:00:04.154) 0:00:19.545 ********** 2025-06-03 15:41:55.111248 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:55.111254 | orchestrator | 2025-06-03 15:41:55.111259 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-03 15:41:55.111264 | orchestrator | Tuesday 03 June 2025 15:38:55 +0000 (0:00:00.512) 
0:00:20.058 ********** 2025-06-03 15:41:55.111274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:55.111280 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.111289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:55.111302 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:55.111311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:55.111317 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.111322 | orchestrator | 2025-06-03 15:41:55.111327 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-03 15:41:55.111332 | orchestrator | Tuesday 03 June 2025 15:38:58 +0000 (0:00:02.939) 0:00:22.997 ********** 2025-06-03 15:41:55.111341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:55.111350 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:55.111360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:55.111366 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.111375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': 
'30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:55.111385 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.111391 | orchestrator | 2025-06-03 15:41:55.111397 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-03 15:41:55.111403 | orchestrator | Tuesday 03 June 2025 15:39:01 +0000 (0:00:02.955) 0:00:25.953 ********** 2025-06-03 15:41:55.111413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:55.111426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:55.111437 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.111443 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.111450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:55.111457 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:55.111462 | orchestrator | 2025-06-03 15:41:55.111468 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-03 15:41:55.111475 | orchestrator | Tuesday 03 June 2025 15:39:04 +0000 (0:00:03.037) 0:00:28.990 ********** 2025-06-03 15:41:55.111534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:55.111548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:55.111559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:55.111570 | orchestrator | 2025-06-03 15:41:55.111575 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-03 15:41:55.111580 | orchestrator | Tuesday 03 June 2025 15:39:08 +0000 (0:00:03.557) 0:00:32.547 ********** 2025-06-03 15:41:55.111585 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.111590 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:55.111595 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:55.111600 | orchestrator | 2025-06-03 15:41:55.111606 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-03 15:41:55.111611 | orchestrator | Tuesday 03 June 2025 15:39:09 +0000 (0:00:01.208) 0:00:33.756 ********** 2025-06-03 15:41:55.111616 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:55.111621 | orchestrator | ok: 
[testbed-node-1] 2025-06-03 15:41:55.111627 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:55.111632 | orchestrator | 2025-06-03 15:41:55.111639 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-03 15:41:55.111645 | orchestrator | Tuesday 03 June 2025 15:39:09 +0000 (0:00:00.373) 0:00:34.129 ********** 2025-06-03 15:41:55.111650 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:55.111655 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:55.111660 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:55.111665 | orchestrator | 2025-06-03 15:41:55.111670 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-03 15:41:55.111675 | orchestrator | Tuesday 03 June 2025 15:39:09 +0000 (0:00:00.346) 0:00:34.475 ********** 2025-06-03 15:41:55.111681 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-03 15:41:55.111687 | orchestrator | ...ignoring 2025-06-03 15:41:55.111693 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-03 15:41:55.111698 | orchestrator | ...ignoring 2025-06-03 15:41:55.111703 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-03 15:41:55.111709 | orchestrator | ...ignoring 2025-06-03 15:41:55.111714 | orchestrator | 2025-06-03 15:41:55.111719 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-03 15:41:55.111724 | orchestrator | Tuesday 03 June 2025 15:39:21 +0000 (0:00:11.156) 0:00:45.632 ********** 2025-06-03 15:41:55.111729 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:55.111734 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:55.111740 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:55.111745 | orchestrator | 2025-06-03 15:41:55.111750 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-03 15:41:55.111755 | orchestrator | Tuesday 03 June 2025 15:39:21 +0000 (0:00:00.687) 0:00:46.320 ********** 2025-06-03 15:41:55.111764 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:55.111769 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.111775 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.111780 | orchestrator | 2025-06-03 15:41:55.111785 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-03 15:41:55.111790 | orchestrator | Tuesday 03 June 2025 15:39:22 +0000 (0:00:00.459) 0:00:46.780 ********** 2025-06-03 15:41:55.111795 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:55.111800 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.111805 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.111810 | orchestrator | 2025-06-03 15:41:55.111815 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-03 15:41:55.111821 | orchestrator | Tuesday 03 June 2025 15:39:22 +0000 (0:00:00.476) 0:00:47.256 ********** 2025-06-03 15:41:55.111826 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:55.111831 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.111836 | orchestrator | 
skipping: [testbed-node-2] 2025-06-03 15:41:55.111841 | orchestrator | 2025-06-03 15:41:55.111846 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-03 15:41:55.111854 | orchestrator | Tuesday 03 June 2025 15:39:23 +0000 (0:00:00.558) 0:00:47.815 ********** 2025-06-03 15:41:55.111860 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:55.111865 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:55.111870 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:55.111875 | orchestrator | 2025-06-03 15:41:55.111880 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-03 15:41:55.111885 | orchestrator | Tuesday 03 June 2025 15:39:24 +0000 (0:00:00.832) 0:00:48.648 ********** 2025-06-03 15:41:55.111890 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:55.111895 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.111900 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.111905 | orchestrator | 2025-06-03 15:41:55.111910 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-03 15:41:55.111916 | orchestrator | Tuesday 03 June 2025 15:39:24 +0000 (0:00:00.527) 0:00:49.175 ********** 2025-06-03 15:41:55.111921 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.111926 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.111931 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-03 15:41:55.111936 | orchestrator | 2025-06-03 15:41:55.111941 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-03 15:41:55.111946 | orchestrator | Tuesday 03 June 2025 15:39:25 +0000 (0:00:00.398) 0:00:49.574 ********** 2025-06-03 15:41:55.111951 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.111956 | orchestrator | 2025-06-03 15:41:55.111961 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-03 15:41:55.111966 | orchestrator | Tuesday 03 June 2025 15:39:36 +0000 (0:00:11.586) 0:01:01.160 ********** 2025-06-03 15:41:55.111972 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:55.111977 | orchestrator | 2025-06-03 15:41:55.111982 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-03 15:41:55.111987 | orchestrator | Tuesday 03 June 2025 15:39:36 +0000 (0:00:00.103) 0:01:01.263 ********** 2025-06-03 15:41:55.111992 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:55.111997 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.112002 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.112007 | orchestrator | 2025-06-03 15:41:55.112012 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-03 15:41:55.112017 | orchestrator | Tuesday 03 June 2025 15:39:37 +0000 (0:00:00.878) 0:01:02.142 ********** 2025-06-03 15:41:55.112022 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.112027 | orchestrator | 2025-06-03 15:41:55.112033 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-03 15:41:55.112038 | orchestrator | Tuesday 03 June 2025 15:39:44 +0000 (0:00:07.251) 0:01:09.393 ********** 2025-06-03 15:41:55.112047 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:55.112052 | orchestrator | 2025-06-03 15:41:55.112057 | 
orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-06-03 15:41:55.112065 | orchestrator | Tuesday 03 June 2025 15:39:47 +0000 (0:00:02.541) 0:01:11.935 ********** 2025-06-03 15:41:55.112071 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:55.112076 | orchestrator | 2025-06-03 15:41:55.112081 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-03 15:41:55.112086 | orchestrator | Tuesday 03 June 2025 15:39:50 +0000 (0:00:02.699) 0:01:14.635 ********** 2025-06-03 15:41:55.112091 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.112096 | orchestrator | 2025-06-03 15:41:55.112101 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-03 15:41:55.112106 | orchestrator | Tuesday 03 June 2025 15:39:50 +0000 (0:00:00.129) 0:01:14.764 ********** 2025-06-03 15:41:55.112111 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:55.112116 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.112122 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.112127 | orchestrator | 2025-06-03 15:41:55.112132 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-03 15:41:55.112137 | orchestrator | Tuesday 03 June 2025 15:39:50 +0000 (0:00:00.533) 0:01:15.297 ********** 2025-06-03 15:41:55.112142 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:55.112147 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-03 15:41:55.112152 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:55.112157 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:55.112162 | orchestrator | 2025-06-03 15:41:55.112167 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-03 15:41:55.112172 | orchestrator | skipping: no hosts matched 2025-06-03 15:41:55.112177 | orchestrator | 2025-06-03 15:41:55.112183 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-03 15:41:55.112188 | orchestrator | 2025-06-03 15:41:55.112193 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-03 15:41:55.112198 | orchestrator | Tuesday 03 June 2025 15:39:51 +0000 (0:00:00.346) 0:01:15.643 ********** 2025-06-03 15:41:55.112203 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:55.112208 | orchestrator | 2025-06-03 15:41:55.112213 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-03 15:41:55.112218 | orchestrator | Tuesday 03 June 2025 15:40:17 +0000 (0:00:26.433) 0:01:42.077 ********** 2025-06-03 15:41:55.112224 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:55.112229 | orchestrator | 2025-06-03 15:41:55.112234 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-03 15:41:55.112239 | orchestrator | Tuesday 03 June 2025 15:40:33 +0000 (0:00:15.644) 0:01:57.722 ********** 2025-06-03 15:41:55.112244 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:55.112249 | orchestrator | 2025-06-03 15:41:55.112254 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-03 15:41:55.112259 | orchestrator | 2025-06-03 15:41:55.112264 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-06-03 15:41:55.112269 | orchestrator | Tuesday 03 June 2025 15:40:35 +0000 (0:00:02.490) 0:02:00.213 ********** 2025-06-03 15:41:55.112352 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:55.112358 | orchestrator | 2025-06-03 15:41:55.112363 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-03 15:41:55.112371 | orchestrator | Tuesday 03 June 2025 15:41:00 +0000 (0:00:24.402) 0:02:24.615 ********** 2025-06-03 15:41:55.112377 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:55.112382 | orchestrator | 2025-06-03 15:41:55.112387 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-03 15:41:55.112392 | orchestrator | Tuesday 03 June 2025 15:41:16 +0000 (0:00:16.572) 0:02:41.188 ********** 2025-06-03 15:41:55.112397 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:55.112407 | orchestrator | 2025-06-03 15:41:55.112412 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-03 15:41:55.112418 | orchestrator | 2025-06-03 15:41:55.112423 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-03 15:41:55.112428 | orchestrator | Tuesday 03 June 2025 15:41:19 +0000 (0:00:02.735) 0:02:43.923 ********** 2025-06-03 15:41:55.112433 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.112438 | orchestrator | 2025-06-03 15:41:55.112443 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-03 15:41:55.112448 | orchestrator | Tuesday 03 June 2025 15:41:31 +0000 (0:00:11.869) 0:02:55.792 ********** 2025-06-03 15:41:55.112453 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:55.112459 | orchestrator | 2025-06-03 15:41:55.112464 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-03 15:41:55.112469 | orchestrator | Tuesday 03 June 2025 15:41:36 +0000 (0:00:05.612) 0:03:01.405 ********** 2025-06-03 15:41:55.112474 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:55.112504 | orchestrator | 2025-06-03 15:41:55.112510 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-03 15:41:55.112522 | orchestrator | 2025-06-03 15:41:55.112527 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-03 15:41:55.112532 | orchestrator | Tuesday 03 June 2025 15:41:39 +0000 (0:00:02.413) 0:03:03.819 ********** 2025-06-03 15:41:55.112537 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:55.112542 | orchestrator | 2025-06-03 15:41:55.112548 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-03 15:41:55.112553 | orchestrator | Tuesday 03 June 2025 15:41:39 +0000 (0:00:00.542) 0:03:04.361 ********** 2025-06-03 15:41:55.112558 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.112563 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.112568 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.112573 | orchestrator | 2025-06-03 15:41:55.112578 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-03 15:41:55.112583 | orchestrator | Tuesday 03 June 2025 15:41:42 +0000 (0:00:02.499) 0:03:06.861 ********** 2025-06-03 15:41:55.112588 | orchestrator | 
skipping: [testbed-node-1] 2025-06-03 15:41:55.112593 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.112598 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.112603 | orchestrator | 2025-06-03 15:41:55.112608 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-03 15:41:55.112613 | orchestrator | Tuesday 03 June 2025 15:41:44 +0000 (0:00:02.482) 0:03:09.344 ********** 2025-06-03 15:41:55.112622 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.112627 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.112632 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.112637 | orchestrator | 2025-06-03 15:41:55.112642 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-03 15:41:55.112647 | orchestrator | Tuesday 03 June 2025 15:41:47 +0000 (0:00:02.334) 0:03:11.679 ********** 2025-06-03 15:41:55.112652 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.112657 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.112662 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:55.112668 | orchestrator | 2025-06-03 15:41:55.112673 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-03 15:41:55.112678 | orchestrator | Tuesday 03 June 2025 15:41:49 +0000 (0:00:02.228) 0:03:13.907 ********** 2025-06-03 15:41:55.112683 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:55.112688 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:55.112693 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:55.112698 | orchestrator | 2025-06-03 15:41:55.112703 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-03 15:41:55.112708 | orchestrator | Tuesday 03 June 2025 15:41:52 +0000 (0:00:02.640) 0:03:16.547 ********** 2025-06-03 15:41:55.112717 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:55.112722 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:55.112728 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:55.112733 | orchestrator | 2025-06-03 15:41:55.112738 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:41:55.112743 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-03 15:41:55.112749 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-03 15:41:55.112756 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-03 15:41:55.112763 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-03 15:41:55.112785 | orchestrator | 2025-06-03 15:41:55.112794 | orchestrator | 2025-06-03 15:41:55.112801 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:41:55.112809 | orchestrator | Tuesday 03 June 2025 15:41:52 +0000 (0:00:00.186) 0:03:16.734 ********** 2025-06-03 15:41:55.112817 | orchestrator | =============================================================================== 2025-06-03 15:41:55.112824 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 50.84s 2025-06-03 15:41:55.112832 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.22s 
2025-06-03 15:41:55.112844 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.87s 2025-06-03 15:41:55.112853 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.59s 2025-06-03 15:41:55.112861 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.16s 2025-06-03 15:41:55.112870 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.25s 2025-06-03 15:41:55.112878 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.61s 2025-06-03 15:41:55.112886 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.23s 2025-06-03 15:41:55.112894 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.15s 2025-06-03 15:41:55.112902 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.59s 2025-06-03 15:41:55.112911 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.56s 2025-06-03 15:41:55.112918 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.19s 2025-06-03 15:41:55.112923 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.04s 2025-06-03 15:41:55.112928 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.96s 2025-06-03 15:41:55.112933 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.94s 2025-06-03 15:41:55.112938 | orchestrator | Check MariaDB service --------------------------------------------------- 2.83s 2025-06-03 15:41:55.112943 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.70s 2025-06-03 15:41:55.112949 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.64s 2025-06-03 15:41:55.112954 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.54s 2025-06-03 15:41:55.112959 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.50s 2025-06-03 15:41:55.112964 | orchestrator | 2025-06-03 15:41:55 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:41:55.112969 | orchestrator | 2025-06-03 15:41:55 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:55.113365 | orchestrator | 2025-06-03 15:41:55 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:41:55.113384 | orchestrator | 2025-06-03 15:41:55 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:58.170439 | orchestrator | 2025-06-03 15:41:58 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:41:58.170757 | orchestrator | 2025-06-03 15:41:58 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:41:58.171988 | orchestrator | 2025-06-03 15:41:58 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:41:58.172077 | orchestrator | 2025-06-03 15:41:58 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:01.227733 | orchestrator | 2025-06-03 15:42:01 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:01.229004 | orchestrator | 2025-06-03 15:42:01 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 
2025-06-03 15:42:01.231043 | orchestrator | 2025-06-03 15:42:01 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:01.231087 | orchestrator | 2025-06-03 15:42:01 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:04.273197 | orchestrator | 2025-06-03 15:42:04 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:04.274655 | orchestrator | 2025-06-03 15:42:04 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:42:04.277189 | orchestrator | 2025-06-03 15:42:04 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:04.277212 | orchestrator | 2025-06-03 15:42:04 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:07.315413 | orchestrator | 2025-06-03 15:42:07 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:07.315607 | orchestrator | 2025-06-03 15:42:07 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:42:07.315626 | orchestrator | 2025-06-03 15:42:07 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:07.315638 | orchestrator | 2025-06-03 15:42:07 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:10.362556 | orchestrator | 2025-06-03 15:42:10 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:10.365462 | orchestrator | 2025-06-03 15:42:10 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:42:10.367207 | orchestrator | 2025-06-03 15:42:10 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:10.367260 | orchestrator | 2025-06-03 15:42:10 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:13.419778 | orchestrator | 2025-06-03 15:42:13 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:13.421310 | orchestrator | 2025-06-03 15:42:13 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:42:13.423521 | orchestrator | 2025-06-03 15:42:13 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:13.423562 | orchestrator | 2025-06-03 15:42:13 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:16.468955 | orchestrator | 2025-06-03 15:42:16 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:16.469071 | orchestrator | 2025-06-03 15:42:16 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:42:16.469087 | orchestrator | 2025-06-03 15:42:16 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:16.469100 | orchestrator | 2025-06-03 15:42:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:19.499983 | orchestrator | 2025-06-03 15:42:19 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:19.500509 | orchestrator | 2025-06-03 15:42:19 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:42:19.501785 | orchestrator | 2025-06-03 15:42:19 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:19.501834 | orchestrator | 2025-06-03 15:42:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:22.549498 | orchestrator | 2025-06-03 15:42:22 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:22.551275 | orchestrator | 
2025-06-03 15:42:22 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:42:22.554325 | orchestrator | 2025-06-03 15:42:22 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:22.554399 | orchestrator | 2025-06-03 15:42:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:25.597641 | orchestrator | 2025-06-03 15:42:25 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:25.601787 | orchestrator | 2025-06-03 15:42:25 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:42:25.604334 | orchestrator | 2025-06-03 15:42:25 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:25.605181 | orchestrator | 2025-06-03 15:42:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:28.646390 | orchestrator | 2025-06-03 15:42:28 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:28.649874 | orchestrator | 2025-06-03 15:42:28 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:42:28.654249 | orchestrator | 2025-06-03 15:42:28 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:28.654339 | orchestrator | 2025-06-03 15:42:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:31.708558 | orchestrator | 2025-06-03 15:42:31 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:31.708674 | orchestrator | 2025-06-03 15:42:31 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:42:31.708882 | orchestrator | 2025-06-03 15:42:31 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:31.709114 | orchestrator | 2025-06-03 15:42:31 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:34.759556 | orchestrator | 2025-06-03 15:42:34 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:34.760964 | orchestrator | 2025-06-03 15:42:34 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state STARTED 2025-06-03 15:42:34.763620 | orchestrator | 2025-06-03 15:42:34 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:34.763700 | orchestrator | 2025-06-03 15:42:34 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:37.808133 | orchestrator | 2025-06-03 15:42:37 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:37.808223 | orchestrator | 2025-06-03 15:42:37 | INFO  | Task 8d7530ae-cb90-48eb-810d-69168dfac2b6 is in state STARTED 2025-06-03 15:42:37.808242 | orchestrator | 2025-06-03 15:42:37 | INFO  | Task 8a69f4a1-9db8-4078-9e40-431affbc3f75 is in state SUCCESS 2025-06-03 15:42:37.809708 | orchestrator | 2025-06-03 15:42:37.809758 | orchestrator | 2025-06-03 15:42:37.809765 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-03 15:42:37.809795 | orchestrator | 2025-06-03 15:42:37.809801 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-03 15:42:37.809808 | orchestrator | Tuesday 03 June 2025 15:40:23 +0000 (0:00:00.756) 0:00:00.756 ********** 2025-06-03 15:42:37.809814 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:42:37.809821 | orchestrator | 2025-06-03 15:42:37.809931 | 
orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-03 15:42:37.809942 | orchestrator | Tuesday 03 June 2025 15:40:24 +0000 (0:00:00.792) 0:00:01.549 ********** 2025-06-03 15:42:37.809948 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.809955 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.809960 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.809965 | orchestrator | 2025-06-03 15:42:37.809971 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-03 15:42:37.809976 | orchestrator | Tuesday 03 June 2025 15:40:25 +0000 (0:00:00.699) 0:00:02.248 ********** 2025-06-03 15:42:37.809982 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.809987 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.809993 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.810225 | orchestrator | 2025-06-03 15:42:37.810240 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-03 15:42:37.810246 | orchestrator | Tuesday 03 June 2025 15:40:25 +0000 (0:00:00.326) 0:00:02.575 ********** 2025-06-03 15:42:37.810251 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.810258 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.810264 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.810270 | orchestrator | 2025-06-03 15:42:37.810277 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-03 15:42:37.810283 | orchestrator | Tuesday 03 June 2025 15:40:26 +0000 (0:00:00.868) 0:00:03.443 ********** 2025-06-03 15:42:37.810290 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.810295 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.810328 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.810335 | orchestrator | 2025-06-03 15:42:37.810342 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-03 15:42:37.810348 | orchestrator | Tuesday 03 June 2025 15:40:26 +0000 (0:00:00.294) 0:00:03.737 ********** 2025-06-03 15:42:37.810354 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.810361 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.810366 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.810372 | orchestrator | 2025-06-03 15:42:37.810378 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-03 15:42:37.810384 | orchestrator | Tuesday 03 June 2025 15:40:26 +0000 (0:00:00.277) 0:00:04.015 ********** 2025-06-03 15:42:37.810390 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.810429 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.810434 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.810437 | orchestrator | 2025-06-03 15:42:37.810442 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-03 15:42:37.810493 | orchestrator | Tuesday 03 June 2025 15:40:27 +0000 (0:00:00.300) 0:00:04.315 ********** 2025-06-03 15:42:37.810499 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.810503 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.810507 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.810511 | orchestrator | 2025-06-03 15:42:37.810515 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-03 15:42:37.810518 | orchestrator | Tuesday 03 June 2025 
15:40:27 +0000 (0:00:00.465) 0:00:04.781 ********** 2025-06-03 15:42:37.810621 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.810628 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.810632 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.810635 | orchestrator | 2025-06-03 15:42:37.810639 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-03 15:42:37.810643 | orchestrator | Tuesday 03 June 2025 15:40:27 +0000 (0:00:00.279) 0:00:05.060 ********** 2025-06-03 15:42:37.810656 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-03 15:42:37.810660 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:42:37.810664 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:42:37.810668 | orchestrator | 2025-06-03 15:42:37.810672 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-03 15:42:37.810675 | orchestrator | Tuesday 03 June 2025 15:40:28 +0000 (0:00:00.613) 0:00:05.674 ********** 2025-06-03 15:42:37.810679 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.810683 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.810687 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.810690 | orchestrator | 2025-06-03 15:42:37.810694 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-03 15:42:37.810698 | orchestrator | Tuesday 03 June 2025 15:40:28 +0000 (0:00:00.407) 0:00:06.081 ********** 2025-06-03 15:42:37.810702 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-03 15:42:37.810730 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:42:37.810735 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:42:37.810739 | orchestrator | 2025-06-03 15:42:37.810742 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-03 15:42:37.810747 | orchestrator | Tuesday 03 June 2025 15:40:31 +0000 (0:00:02.122) 0:00:08.204 ********** 2025-06-03 15:42:37.810751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-03 15:42:37.810755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-03 15:42:37.810759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-03 15:42:37.810762 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.810766 | orchestrator | 2025-06-03 15:42:37.810770 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-03 15:42:37.810792 | orchestrator | Tuesday 03 June 2025 15:40:31 +0000 (0:00:00.404) 0:00:08.609 ********** 2025-06-03 15:42:37.810799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.810806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.810811 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.810817 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.810823 | orchestrator | 2025-06-03 15:42:37.810828 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-03 15:42:37.810833 | orchestrator | Tuesday 03 June 2025 15:40:32 +0000 (0:00:00.801) 0:00:09.411 ********** 2025-06-03 15:42:37.810844 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.810855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.810872 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.810878 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.810884 | orchestrator | 2025-06-03 15:42:37.810890 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-03 15:42:37.810895 | orchestrator | Tuesday 03 June 2025 15:40:32 +0000 (0:00:00.150) 0:00:09.561 ********** 2025-06-03 15:42:37.810902 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fd85f8e36d0c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-03 15:40:29.652021', 'end': '2025-06-03 15:40:29.698026', 'delta': '0:00:00.046005', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fd85f8e36d0c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-03 15:42:37.810911 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5c92c998442e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-03 15:40:30.372191', 'end': '2025-06-03 15:40:30.409087', 'delta': '0:00:00.036896', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', 
'_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c92c998442e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-03 15:42:37.810970 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '586ffcfd7931', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-03 15:40:30.904543', 'end': '2025-06-03 15:40:30.945637', 'delta': '0:00:00.041094', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['586ffcfd7931'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-03 15:42:37.810980 | orchestrator | 2025-06-03 15:42:37.810986 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-03 15:42:37.810992 | orchestrator | Tuesday 03 June 2025 15:40:32 +0000 (0:00:00.369) 0:00:09.930 ********** 2025-06-03 15:42:37.810996 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.811000 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.811004 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.811008 | orchestrator | 2025-06-03 15:42:37.811012 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-03 15:42:37.811016 | orchestrator | Tuesday 03 June 2025 15:40:33 +0000 (0:00:00.466) 0:00:10.396 ********** 2025-06-03 15:42:37.811020 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-03 15:42:37.811029 | orchestrator | 2025-06-03 15:42:37.811033 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-03 15:42:37.811037 | orchestrator | Tuesday 03 June 2025 15:40:35 +0000 (0:00:01.786) 0:00:12.183 ********** 2025-06-03 15:42:37.811040 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811044 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.811048 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.811052 | orchestrator | 2025-06-03 15:42:37.811055 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-03 15:42:37.811059 | orchestrator | Tuesday 03 June 2025 15:40:35 +0000 (0:00:00.294) 0:00:12.477 ********** 2025-06-03 15:42:37.811063 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811067 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.811070 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.811074 | orchestrator | 2025-06-03 15:42:37.811078 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-03 15:42:37.811082 | orchestrator | Tuesday 03 June 2025 15:40:35 +0000 (0:00:00.394) 0:00:12.872 ********** 2025-06-03 15:42:37.811085 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811089 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.811093 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.811097 | orchestrator 
| 2025-06-03 15:42:37.811101 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-03 15:42:37.811105 | orchestrator | Tuesday 03 June 2025 15:40:36 +0000 (0:00:00.480) 0:00:13.352 ********** 2025-06-03 15:42:37.811108 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.811112 | orchestrator | 2025-06-03 15:42:37.811116 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-03 15:42:37.811120 | orchestrator | Tuesday 03 June 2025 15:40:36 +0000 (0:00:00.130) 0:00:13.483 ********** 2025-06-03 15:42:37.811124 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811128 | orchestrator | 2025-06-03 15:42:37.811132 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-03 15:42:37.811165 | orchestrator | Tuesday 03 June 2025 15:40:36 +0000 (0:00:00.249) 0:00:13.733 ********** 2025-06-03 15:42:37.811169 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811173 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.811176 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.811180 | orchestrator | 2025-06-03 15:42:37.811184 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-03 15:42:37.811188 | orchestrator | Tuesday 03 June 2025 15:40:36 +0000 (0:00:00.295) 0:00:14.028 ********** 2025-06-03 15:42:37.811191 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811195 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.811199 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.811202 | orchestrator | 2025-06-03 15:42:37.811206 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-03 15:42:37.811210 | orchestrator | Tuesday 03 June 2025 15:40:37 +0000 (0:00:00.354) 0:00:14.382 ********** 2025-06-03 15:42:37.811213 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811217 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.811221 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.811225 | orchestrator | 2025-06-03 15:42:37.811228 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-03 15:42:37.811232 | orchestrator | Tuesday 03 June 2025 15:40:37 +0000 (0:00:00.615) 0:00:14.998 ********** 2025-06-03 15:42:37.811236 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811239 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.811243 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.811247 | orchestrator | 2025-06-03 15:42:37.811250 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-03 15:42:37.811254 | orchestrator | Tuesday 03 June 2025 15:40:38 +0000 (0:00:00.335) 0:00:15.333 ********** 2025-06-03 15:42:37.811261 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811265 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.811269 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.811272 | orchestrator | 2025-06-03 15:42:37.811276 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-03 15:42:37.811280 | orchestrator | Tuesday 03 June 2025 15:40:38 +0000 (0:00:00.291) 0:00:15.624 ********** 2025-06-03 15:42:37.811284 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811287 | 
orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.811291 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.811295 | orchestrator | 2025-06-03 15:42:37.811299 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-03 15:42:37.811319 | orchestrator | Tuesday 03 June 2025 15:40:38 +0000 (0:00:00.264) 0:00:15.889 ********** 2025-06-03 15:42:37.811324 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811328 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.811332 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.811335 | orchestrator | 2025-06-03 15:42:37.811339 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-03 15:42:37.811343 | orchestrator | Tuesday 03 June 2025 15:40:39 +0000 (0:00:00.425) 0:00:16.314 ********** 2025-06-03 15:42:37.811348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a5276575--f764--5428--894d--d125091c496f-osd--block--a5276575--f764--5428--894d--d125091c496f', 'dm-uuid-LVM-nRGGPaStpf29XH9PEFiJRgvLNzQzUF0gerYnP8cTcH9vwrCe8WxdOsBU1eSIbIrQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a443cc3--e60d--5588--869b--39e93dfe07d6-osd--block--6a443cc3--e60d--5588--869b--39e93dfe07d6', 'dm-uuid-LVM-IJupnY7jw4zZIhHRi8XfW4ylftnbUxEodz46P8IGX2f1J5WOOoqYFRFb6vaoDnJW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-06-03 15:42:37.811373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8e839e97--cc3d--5431--ae91--f94b997cade9-osd--block--8e839e97--cc3d--5431--ae91--f94b997cade9', 'dm-uuid-LVM-gYcuttOc0Nsrc1gF55i0dQSUdy23zEIrf1Rj8ySrnDtugXtGF8mEf160mWrRLyjO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1191cd60--4b8c--5454--8e42--9818af3c2595-osd--block--1191cd60--4b8c--5454--8e42--9818af3c2595', 'dm-uuid-LVM-NucV1Eabq1nHybqCjjD5eQKyszZctw33gCYxE9GWcC0Qbc0ALYU7xKpyegXBvmIQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': 
'4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part1', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part14', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part15', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part16', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a5276575--f764--5428--894d--d125091c496f-osd--block--a5276575--f764--5428--894d--d125091c496f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8xz9pM-8Jia-cKtn-lqgw-8Ibt-cWui-cV2SXp', 'scsi-0QEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e', 'scsi-SQEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6a443cc3--e60d--5588--869b--39e93dfe07d6-osd--block--6a443cc3--e60d--5588--869b--39e93dfe07d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Hwp9sC-fQdV-TeI0-ezcd-CfFv-VmPG-5DRkFi', 'scsi-0QEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2', 'scsi-SQEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5', 'scsi-SQEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811537 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part1', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part14', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part15', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part16', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53b632c4--9781--517b--ad8e--3b37c9789a01-osd--block--53b632c4--9781--517b--ad8e--3b37c9789a01', 'dm-uuid-LVM-UtCBhN7ekwDglfkwPU5DbbuGlpfvVLSwBka3LgpTl8Lccw3S0l125OrhR4Kqu1yj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8e839e97--cc3d--5431--ae91--f94b997cade9-osd--block--8e839e97--cc3d--5431--ae91--f94b997cade9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z5SOHc-LMIV-Hnzh-9Kru-F05l-1qWm-9j1z7i', 'scsi-0QEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81', 'scsi-SQEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5-osd--block--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5', 'dm-uuid-LVM-p6jyVjBaN36kCqNbczwHStJEw3wpSqPf2EJHcEOZJK3L7OfNvBvO6tOL8SFtY98W'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1191cd60--4b8c--5454--8e42--9818af3c2595-osd--block--1191cd60--4b8c--5454--8e42--9818af3c2595'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6GtL0i-ZCvM-RFD1-a3yQ-P1i5-4296-CIdOY0', 'scsi-0QEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35', 'scsi-SQEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144', 'scsi-SQEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811625 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.811630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:42:37.811663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part1', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part14', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part15', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part16', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--53b632c4--9781--517b--ad8e--3b37c9789a01-osd--block--53b632c4--9781--517b--ad8e--3b37c9789a01'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RcY2qy-IZnS-duNN-Nt07-lHNx-kgon-LcaO84', 'scsi-0QEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9', 'scsi-SQEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5-osd--block--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QKTKdO-gmhA-eDdm-Bbme-bRgB-KIEK-fW57I9', 'scsi-0QEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057', 'scsi-SQEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447', 'scsi-SQEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:42:37.811696 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.811700 | orchestrator | 2025-06-03 15:42:37.811705 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-03 15:42:37.811709 | orchestrator | Tuesday 03 June 2025 15:40:39 +0000 (0:00:00.466) 0:00:16.780 ********** 2025-06-03 15:42:37.811715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a5276575--f764--5428--894d--d125091c496f-osd--block--a5276575--f764--5428--894d--d125091c496f', 'dm-uuid-LVM-nRGGPaStpf29XH9PEFiJRgvLNzQzUF0gerYnP8cTcH9vwrCe8WxdOsBU1eSIbIrQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811720 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a443cc3--e60d--5588--869b--39e93dfe07d6-osd--block--6a443cc3--e60d--5588--869b--39e93dfe07d6', 'dm-uuid-LVM-IJupnY7jw4zZIhHRi8XfW4ylftnbUxEodz46P8IGX2f1J5WOOoqYFRFb6vaoDnJW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811730 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811739 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811752 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811772 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811776 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8e839e97--cc3d--5431--ae91--f94b997cade9-osd--block--8e839e97--cc3d--5431--ae91--f94b997cade9', 'dm-uuid-LVM-gYcuttOc0Nsrc1gF55i0dQSUdy23zEIrf1Rj8ySrnDtugXtGF8mEf160mWrRLyjO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811785 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part1', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part14', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part15', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part16', 'scsi-SQEMU_QEMU_HARDDISK_f0290b61-6b8b-4cc7-ab0c-9f653b503509-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811798 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1191cd60--4b8c--5454--8e42--9818af3c2595-osd--block--1191cd60--4b8c--5454--8e42--9818af3c2595', 'dm-uuid-LVM-NucV1Eabq1nHybqCjjD5eQKyszZctw33gCYxE9GWcC0Qbc0ALYU7xKpyegXBvmIQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811803 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a5276575--f764--5428--894d--d125091c496f-osd--block--a5276575--f764--5428--894d--d125091c496f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8xz9pM-8Jia-cKtn-lqgw-8Ibt-cWui-cV2SXp', 'scsi-0QEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e', 'scsi-SQEMU_QEMU_HARDDISK_ed9de92b-af3d-4178-85d8-fb362235eb6e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811808 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6a443cc3--e60d--5588--869b--39e93dfe07d6-osd--block--6a443cc3--e60d--5588--869b--39e93dfe07d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Hwp9sC-fQdV-TeI0-ezcd-CfFv-VmPG-5DRkFi', 'scsi-0QEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2', 'scsi-SQEMU_QEMU_HARDDISK_fdccfd9d-7310-474c-a0d9-9edfc2c702c2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811820 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5', 'scsi-SQEMU_QEMU_HARDDISK_8933e5be-3d9f-49f8-8e64-ba28ae06c2c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811840 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811844 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.811851 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811856 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811860 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811871 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53b632c4--9781--517b--ad8e--3b37c9789a01-osd--block--53b632c4--9781--517b--ad8e--3b37c9789a01', 'dm-uuid-LVM-UtCBhN7ekwDglfkwPU5DbbuGlpfvVLSwBka3LgpTl8Lccw3S0l125OrhR4Kqu1yj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5-osd--block--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5', 'dm-uuid-LVM-p6jyVjBaN36kCqNbczwHStJEw3wpSqPf2EJHcEOZJK3L7OfNvBvO6tOL8SFtY98W'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811889 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811894 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) 
| bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811902 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811910 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part1', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part14', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part15', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part16', 'scsi-SQEMU_QEMU_HARDDISK_dda24dc0-b982-41a5-9f14-a27821313269-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811919 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811924 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8e839e97--cc3d--5431--ae91--f94b997cade9-osd--block--8e839e97--cc3d--5431--ae91--f94b997cade9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z5SOHc-LMIV-Hnzh-9Kru-F05l-1qWm-9j1z7i', 'scsi-0QEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81', 'scsi-SQEMU_QEMU_HARDDISK_2951de99-f35b-4f27-b1a6-63f5628a8d81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811932 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1191cd60--4b8c--5454--8e42--9818af3c2595-osd--block--1191cd60--4b8c--5454--8e42--9818af3c2595'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6GtL0i-ZCvM-RFD1-a3yQ-P1i5-4296-CIdOY0', 'scsi-0QEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35', 'scsi-SQEMU_QEMU_HARDDISK_ed26131c-3f0f-451a-b8c2-bbd32b81be35'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811944 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811951 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144', 'scsi-SQEMU_QEMU_HARDDISK_c4f16882-4bb9-4b45-98df-7e8f068d9144'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811964 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811968 | orchestrator | skipping: [testbed-node-4] 
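The skipped items in this run come from ceph-ansible looping over every block device reported in the host facts; each item is skipped because osd_auto_discovery is left at its default of false, so the OSD devices are taken from an explicit list instead of being discovered. A minimal sketch of the corresponding group_vars entry, assuming a standard ceph-ansible layout (the file path and device names are illustrative, not taken from this job, although sdb/sdc match the OSD-backed disks visible in the facts above):

# group_vars/osds.yml (path and values assumed for illustration)
osd_auto_discovery: false   # default seen in the log; keeps the per-device discovery loop skipped
devices:                    # explicit OSD device list used instead of auto discovery
  - /dev/sdb
  - /dev/sdc

With osd_auto_discovery set to true, ceph-ansible would instead build the device list from ansible_facts['devices'], which is exactly the dictionary being iterated in the skipped task.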
2025-06-03 15:42:37.811975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811979 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811988 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part1', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part14', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part15', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part16', 'scsi-SQEMU_QEMU_HARDDISK_b41579e6-9332-4319-8cbf-d77eb525d8df-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.811995 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--53b632c4--9781--517b--ad8e--3b37c9789a01-osd--block--53b632c4--9781--517b--ad8e--3b37c9789a01'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RcY2qy-IZnS-duNN-Nt07-lHNx-kgon-LcaO84', 'scsi-0QEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9', 'scsi-SQEMU_QEMU_HARDDISK_31f44141-6971-4db5-beb8-c246a91f5ce9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.812001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5-osd--block--ba1ebe02--3aa8--524d--8f69--e3cc70944ba5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QKTKdO-gmhA-eDdm-Bbme-bRgB-KIEK-fW57I9', 'scsi-0QEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057', 'scsi-SQEMU_QEMU_HARDDISK_fcdad7f2-a581-4945-a365-f13dc1f4f057'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.812005 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447', 'scsi-SQEMU_QEMU_HARDDISK_2cdbec4e-06c4-422d-9c10-82dc5d1a2447'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.812014 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-51-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:42:37.812021 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.812025 | orchestrator | 2025-06-03 15:42:37.812029 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-03 15:42:37.812033 | orchestrator | Tuesday 03 June 2025 15:40:40 +0000 (0:00:00.498) 0:00:17.278 ********** 2025-06-03 15:42:37.812037 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.812041 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.812044 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.812048 | orchestrator | 2025-06-03 15:42:37.812052 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-03 15:42:37.812056 | orchestrator | Tuesday 03 June 2025 15:40:40 +0000 (0:00:00.628) 0:00:17.907 ********** 2025-06-03 15:42:37.812059 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.812063 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.812067 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.812071 | orchestrator | 2025-06-03 15:42:37.812074 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-03 15:42:37.812078 | orchestrator | Tuesday 03 June 2025 15:40:41 +0000 (0:00:00.387) 0:00:18.295 ********** 2025-06-03 15:42:37.812082 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.812086 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.812089 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.812093 | orchestrator | 2025-06-03 15:42:37.812097 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-03 15:42:37.812101 | orchestrator | Tuesday 03 June 2025 15:40:41 +0000 (0:00:00.664) 0:00:18.960 ********** 2025-06-03 15:42:37.812105 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.812108 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.812112 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.812116 | orchestrator | 2025-06-03 15:42:37.812120 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-03 15:42:37.812123 | orchestrator | Tuesday 03 June 2025 
15:40:42 +0000 (0:00:00.305) 0:00:19.265 ********** 2025-06-03 15:42:37.812127 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.812131 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.812135 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.812138 | orchestrator | 2025-06-03 15:42:37.812142 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-03 15:42:37.812146 | orchestrator | Tuesday 03 June 2025 15:40:42 +0000 (0:00:00.402) 0:00:19.667 ********** 2025-06-03 15:42:37.812150 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.812153 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.812157 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.812161 | orchestrator | 2025-06-03 15:42:37.812168 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-03 15:42:37.812172 | orchestrator | Tuesday 03 June 2025 15:40:43 +0000 (0:00:00.555) 0:00:20.223 ********** 2025-06-03 15:42:37.812176 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-03 15:42:37.812179 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-03 15:42:37.812183 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-03 15:42:37.812187 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-03 15:42:37.812191 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-03 15:42:37.812195 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-03 15:42:37.812198 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-03 15:42:37.812202 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-03 15:42:37.812206 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-03 15:42:37.812210 | orchestrator | 2025-06-03 15:42:37.812213 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-03 15:42:37.812217 | orchestrator | Tuesday 03 June 2025 15:40:43 +0000 (0:00:00.858) 0:00:21.081 ********** 2025-06-03 15:42:37.812221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-03 15:42:37.812227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-03 15:42:37.812231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-03 15:42:37.812235 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-03 15:42:37.812238 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-03 15:42:37.812242 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-03 15:42:37.812246 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.812250 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.812253 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-03 15:42:37.812257 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-03 15:42:37.812261 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-03 15:42:37.812264 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.812268 | orchestrator | 2025-06-03 15:42:37.812272 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-03 15:42:37.812276 | orchestrator | Tuesday 03 June 2025 15:40:44 +0000 (0:00:00.375) 0:00:21.457 ********** 2025-06-03 15:42:37.812280 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:42:37.812283 | orchestrator | 2025-06-03 15:42:37.812287 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-03 15:42:37.812291 | orchestrator | Tuesday 03 June 2025 15:40:44 +0000 (0:00:00.607) 0:00:22.064 ********** 2025-06-03 15:42:37.812295 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.812299 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.812303 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.812306 | orchestrator | 2025-06-03 15:42:37.812313 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-03 15:42:37.812317 | orchestrator | Tuesday 03 June 2025 15:40:45 +0000 (0:00:00.331) 0:00:22.396 ********** 2025-06-03 15:42:37.812320 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.812324 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.812328 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.812332 | orchestrator | 2025-06-03 15:42:37.812336 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-03 15:42:37.812339 | orchestrator | Tuesday 03 June 2025 15:40:45 +0000 (0:00:00.275) 0:00:22.671 ********** 2025-06-03 15:42:37.812343 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.812347 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.812351 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:42:37.812354 | orchestrator | 2025-06-03 15:42:37.812358 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-03 15:42:37.812362 | orchestrator | Tuesday 03 June 2025 15:40:45 +0000 (0:00:00.284) 0:00:22.956 ********** 2025-06-03 15:42:37.812366 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.812369 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.812373 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.812377 | orchestrator | 2025-06-03 15:42:37.812381 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-03 15:42:37.812384 | orchestrator | Tuesday 03 June 2025 15:40:46 +0000 (0:00:00.621) 0:00:23.577 ********** 2025-06-03 15:42:37.812388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:42:37.812392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:42:37.812396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:42:37.812399 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.812403 | orchestrator | 2025-06-03 15:42:37.812407 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-03 15:42:37.812411 | orchestrator | Tuesday 03 June 2025 15:40:46 +0000 (0:00:00.333) 0:00:23.911 ********** 2025-06-03 15:42:37.812417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:42:37.812421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:42:37.812425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:42:37.812429 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.812432 | orchestrator | 2025-06-03 15:42:37.812436 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-03 15:42:37.812440 | orchestrator | Tuesday 03 June 2025 15:40:47 +0000 (0:00:00.333) 0:00:24.244 ********** 2025-06-03 15:42:37.812444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:42:37.812460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:42:37.812464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:42:37.812468 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.812472 | orchestrator | 2025-06-03 15:42:37.812478 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-03 15:42:37.812482 | orchestrator | Tuesday 03 June 2025 15:40:47 +0000 (0:00:00.337) 0:00:24.582 ********** 2025-06-03 15:42:37.812485 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:42:37.812489 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:42:37.812493 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:42:37.812497 | orchestrator | 2025-06-03 15:42:37.812500 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-03 15:42:37.812504 | orchestrator | Tuesday 03 June 2025 15:40:47 +0000 (0:00:00.271) 0:00:24.853 ********** 2025-06-03 15:42:37.812508 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-03 15:42:37.812512 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-03 15:42:37.812515 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-03 15:42:37.812519 | orchestrator | 2025-06-03 15:42:37.812523 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-03 15:42:37.812527 | orchestrator | Tuesday 03 June 2025 15:40:48 +0000 (0:00:00.434) 0:00:25.288 ********** 2025-06-03 15:42:37.812531 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-03 15:42:37.812534 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:42:37.812538 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:42:37.812542 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-03 15:42:37.812546 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-03 15:42:37.812549 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-03 15:42:37.812553 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-03 15:42:37.812557 | orchestrator | 2025-06-03 15:42:37.812561 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-03 15:42:37.812565 | orchestrator | Tuesday 03 June 2025 15:40:49 +0000 (0:00:00.973) 0:00:26.262 ********** 2025-06-03 15:42:37.812568 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-03 15:42:37.812572 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:42:37.812576 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:42:37.812579 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-03 15:42:37.812583 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-03 
15:42:37.812587 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-03 15:42:37.812591 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-03 15:42:37.812594 | orchestrator | 2025-06-03 15:42:37.812600 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-06-03 15:42:37.812607 | orchestrator | Tuesday 03 June 2025 15:40:51 +0000 (0:00:01.999) 0:00:28.261 ********** 2025-06-03 15:42:37.812611 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:42:37.812615 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:42:37.812619 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-06-03 15:42:37.812623 | orchestrator | 2025-06-03 15:42:37.812626 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-06-03 15:42:37.812630 | orchestrator | Tuesday 03 June 2025 15:40:51 +0000 (0:00:00.360) 0:00:28.621 ********** 2025-06-03 15:42:37.812634 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:42:37.812639 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:42:37.812643 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:42:37.812647 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:42:37.812650 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:42:37.812654 | orchestrator | 2025-06-03 15:42:37.812660 | orchestrator | TASK [generate keys] *********************************************************** 2025-06-03 15:42:37.812664 | orchestrator | Tuesday 03 June 2025 15:41:39 +0000 (0:00:48.032) 0:01:16.654 ********** 2025-06-03 15:42:37.812668 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812672 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812675 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812679 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812683 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812687 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812690 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-06-03 15:42:37.812694 | orchestrator | 2025-06-03 15:42:37.812698 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-06-03 15:42:37.812702 | orchestrator | Tuesday 03 June 2025 15:42:05 +0000 (0:00:25.850) 0:01:42.504 ********** 2025-06-03 15:42:37.812706 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812709 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812713 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812717 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812721 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812724 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812731 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-03 15:42:37.812735 | orchestrator | 2025-06-03 15:42:37.812739 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-03 15:42:37.812742 | orchestrator | Tuesday 03 June 2025 15:42:18 +0000 (0:00:12.906) 0:01:55.411 ********** 2025-06-03 15:42:37.812746 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812750 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:42:37.812754 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:42:37.812757 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812761 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:42:37.812765 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:42:37.812772 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812776 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:42:37.812780 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:42:37.812783 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812787 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:42:37.812791 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:42:37.812795 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812798 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:42:37.812802 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:42:37.812806 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:42:37.812810 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:42:37.812813 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:42:37.812817 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-06-03 15:42:37.812821 | orchestrator | 2025-06-03 15:42:37.812825 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:42:37.812828 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-06-03 15:42:37.812833 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-03 15:42:37.812837 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-03 15:42:37.812840 | orchestrator | 2025-06-03 15:42:37.812844 | orchestrator | 2025-06-03 15:42:37.812848 | orchestrator | 2025-06-03 15:42:37.812852 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:42:37.812855 | orchestrator | Tuesday 03 June 2025 15:42:36 +0000 (0:00:17.873) 0:02:13.285 ********** 2025-06-03 15:42:37.812859 | orchestrator | =============================================================================== 2025-06-03 15:42:37.812863 | orchestrator | create openstack pool(s) ----------------------------------------------- 48.03s 2025-06-03 15:42:37.812867 | orchestrator | generate keys ---------------------------------------------------------- 25.85s 2025-06-03 15:42:37.812873 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.87s 2025-06-03 15:42:37.812877 | orchestrator | get keys from monitors ------------------------------------------------- 12.91s 2025-06-03 15:42:37.812884 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.12s 2025-06-03 15:42:37.812888 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.00s 2025-06-03 15:42:37.812891 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.79s 2025-06-03 15:42:37.812895 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.97s 2025-06-03 15:42:37.812899 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.87s 2025-06-03 15:42:37.812903 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.86s 2025-06-03 15:42:37.812906 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s 2025-06-03 15:42:37.812910 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.79s 2025-06-03 15:42:37.812914 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.70s 2025-06-03 15:42:37.812918 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s 2025-06-03 15:42:37.812921 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.63s 2025-06-03 15:42:37.812925 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.62s 2025-06-03 15:42:37.812929 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.62s 2025-06-03 15:42:37.812933 | orchestrator | ceph-facts : Set_fact monitor_name 
ansible_facts['hostname'] ------------ 0.61s 2025-06-03 15:42:37.812936 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.61s 2025-06-03 15:42:37.812940 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.56s 2025-06-03 15:42:37.812944 | orchestrator | 2025-06-03 15:42:37 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:37.812948 | orchestrator | 2025-06-03 15:42:37 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:40.866246 | orchestrator | 2025-06-03 15:42:40 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:40.867440 | orchestrator | 2025-06-03 15:42:40 | INFO  | Task 8d7530ae-cb90-48eb-810d-69168dfac2b6 is in state STARTED 2025-06-03 15:42:40.869671 | orchestrator | 2025-06-03 15:42:40 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:40.869902 | orchestrator | 2025-06-03 15:42:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:43.919121 | orchestrator | 2025-06-03 15:42:43 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:43.920745 | orchestrator | 2025-06-03 15:42:43 | INFO  | Task 8d7530ae-cb90-48eb-810d-69168dfac2b6 is in state STARTED 2025-06-03 15:42:43.922720 | orchestrator | 2025-06-03 15:42:43 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:43.922780 | orchestrator | 2025-06-03 15:42:43 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:46.971369 | orchestrator | 2025-06-03 15:42:46 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:46.971493 | orchestrator | 2025-06-03 15:42:46 | INFO  | Task 8d7530ae-cb90-48eb-810d-69168dfac2b6 is in state STARTED 2025-06-03 15:42:46.973526 | orchestrator | 2025-06-03 15:42:46 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:46.973566 | orchestrator | 2025-06-03 15:42:46 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:50.026304 | orchestrator | 2025-06-03 15:42:50 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:50.036540 | orchestrator | 2025-06-03 15:42:50 | INFO  | Task 8d7530ae-cb90-48eb-810d-69168dfac2b6 is in state STARTED 2025-06-03 15:42:50.041273 | orchestrator | 2025-06-03 15:42:50 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:50.042255 | orchestrator | 2025-06-03 15:42:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:53.090564 | orchestrator | 2025-06-03 15:42:53 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:53.092967 | orchestrator | 2025-06-03 15:42:53 | INFO  | Task 8d7530ae-cb90-48eb-810d-69168dfac2b6 is in state STARTED 2025-06-03 15:42:53.095872 | orchestrator | 2025-06-03 15:42:53 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:53.096034 | orchestrator | 2025-06-03 15:42:53 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:56.149889 | orchestrator | 2025-06-03 15:42:56 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:56.153307 | orchestrator | 2025-06-03 15:42:56 | INFO  | Task 8d7530ae-cb90-48eb-810d-69168dfac2b6 is in state STARTED 2025-06-03 15:42:56.156165 | orchestrator | 2025-06-03 15:42:56 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 
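The "create openstack pool(s)" task earlier in this play created the backups, volumes, images, metrics and vms pools, each with pg_num/pgp_num 32, size 3, pg_autoscale_mode disabled and the rbd application. A minimal sketch of the pool list driving that loop, assuming the ceph-ansible openstack_config/openstack_pools variables are used (the variable names and file location are assumptions; the pool values mirror the log output):

# ceph group configuration (location assumed)
openstack_config: true        # assumed switch that pulls in openstack_config.yml
openstack_pools:
  - { name: backups, application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: volumes, application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: images,  application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: metrics, application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: vms,     application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }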
2025-06-03 15:42:56.156215 | orchestrator | 2025-06-03 15:42:56 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:42:59.205756 | orchestrator | 2025-06-03 15:42:59 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:42:59.207236 | orchestrator | 2025-06-03 15:42:59 | INFO  | Task 8d7530ae-cb90-48eb-810d-69168dfac2b6 is in state STARTED 2025-06-03 15:42:59.209243 | orchestrator | 2025-06-03 15:42:59 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:42:59.209285 | orchestrator | 2025-06-03 15:42:59 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:02.258572 | orchestrator | 2025-06-03 15:43:02 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:02.261476 | orchestrator | 2025-06-03 15:43:02 | INFO  | Task 8d7530ae-cb90-48eb-810d-69168dfac2b6 is in state STARTED 2025-06-03 15:43:02.264611 | orchestrator | 2025-06-03 15:43:02 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:02.264651 | orchestrator | 2025-06-03 15:43:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:05.325064 | orchestrator | 2025-06-03 15:43:05 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:05.330351 | orchestrator | 2025-06-03 15:43:05 | INFO  | Task 8d7530ae-cb90-48eb-810d-69168dfac2b6 is in state STARTED 2025-06-03 15:43:05.333232 | orchestrator | 2025-06-03 15:43:05 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:05.333303 | orchestrator | 2025-06-03 15:43:05 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:08.388858 | orchestrator | 2025-06-03 15:43:08 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:08.389517 | orchestrator | 2025-06-03 15:43:08 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:08.391061 | orchestrator | 2025-06-03 15:43:08 | INFO  | Task 8d7530ae-cb90-48eb-810d-69168dfac2b6 is in state SUCCESS 2025-06-03 15:43:08.392262 | orchestrator | 2025-06-03 15:43:08 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:08.392540 | orchestrator | 2025-06-03 15:43:08 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:11.437976 | orchestrator | 2025-06-03 15:43:11 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:11.438357 | orchestrator | 2025-06-03 15:43:11 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:11.439243 | orchestrator | 2025-06-03 15:43:11 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:11.439276 | orchestrator | 2025-06-03 15:43:11 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:14.487827 | orchestrator | 2025-06-03 15:43:14 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:14.488650 | orchestrator | 2025-06-03 15:43:14 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:14.490095 | orchestrator | 2025-06-03 15:43:14 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:14.490314 | orchestrator | 2025-06-03 15:43:14 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:17.548627 | orchestrator | 2025-06-03 15:43:17 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:17.552563 | orchestrator | 
2025-06-03 15:43:17 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:17.556472 | orchestrator | 2025-06-03 15:43:17 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:17.556544 | orchestrator | 2025-06-03 15:43:17 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:20.616634 | orchestrator | 2025-06-03 15:43:20 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:20.618925 | orchestrator | 2025-06-03 15:43:20 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:20.621050 | orchestrator | 2025-06-03 15:43:20 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:20.621094 | orchestrator | 2025-06-03 15:43:20 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:23.668129 | orchestrator | 2025-06-03 15:43:23 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:23.676033 | orchestrator | 2025-06-03 15:43:23 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:23.676127 | orchestrator | 2025-06-03 15:43:23 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:23.676147 | orchestrator | 2025-06-03 15:43:23 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:26.723322 | orchestrator | 2025-06-03 15:43:26 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:26.725161 | orchestrator | 2025-06-03 15:43:26 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:26.727488 | orchestrator | 2025-06-03 15:43:26 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:26.727539 | orchestrator | 2025-06-03 15:43:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:29.778277 | orchestrator | 2025-06-03 15:43:29 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:29.779003 | orchestrator | 2025-06-03 15:43:29 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:29.779770 | orchestrator | 2025-06-03 15:43:29 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:29.779816 | orchestrator | 2025-06-03 15:43:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:32.815838 | orchestrator | 2025-06-03 15:43:32 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:32.817125 | orchestrator | 2025-06-03 15:43:32 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:32.818186 | orchestrator | 2025-06-03 15:43:32 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:32.818250 | orchestrator | 2025-06-03 15:43:32 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:35.916387 | orchestrator | 2025-06-03 15:43:35 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:35.917926 | orchestrator | 2025-06-03 15:43:35 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:35.919538 | orchestrator | 2025-06-03 15:43:35 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:35.919950 | orchestrator | 2025-06-03 15:43:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:38.966292 | orchestrator | 2025-06-03 15:43:38 | INFO  | Task 
c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:38.968537 | orchestrator | 2025-06-03 15:43:38 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:38.970639 | orchestrator | 2025-06-03 15:43:38 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:38.970705 | orchestrator | 2025-06-03 15:43:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:42.024949 | orchestrator | 2025-06-03 15:43:42 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:42.028811 | orchestrator | 2025-06-03 15:43:42 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:42.030824 | orchestrator | 2025-06-03 15:43:42 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state STARTED 2025-06-03 15:43:42.030865 | orchestrator | 2025-06-03 15:43:42 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:45.076145 | orchestrator | 2025-06-03 15:43:45 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:45.077110 | orchestrator | 2025-06-03 15:43:45 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:45.077812 | orchestrator | 2025-06-03 15:43:45 | INFO  | Task 61e5cd60-6809-4580-bbe4-82cc562867a6 is in state SUCCESS 2025-06-03 15:43:45.079732 | orchestrator | 2025-06-03 15:43:45.079814 | orchestrator | 2025-06-03 15:43:45.079828 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-03 15:43:45.080072 | orchestrator | 2025-06-03 15:43:45.080085 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-03 15:43:45.080106 | orchestrator | Tuesday 03 June 2025 15:42:40 +0000 (0:00:00.158) 0:00:00.158 ********** 2025-06-03 15:43:45.080112 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-03 15:43:45.080123 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-03 15:43:45.080139 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-03 15:43:45.080166 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-06-03 15:43:45.080176 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-03 15:43:45.080185 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-03 15:43:45.080194 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-03 15:43:45.080203 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-03 15:43:45.080213 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-03 15:43:45.080222 | orchestrator | 2025-06-03 15:43:45.080231 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-03 15:43:45.080265 | orchestrator | Tuesday 03 June 2025 15:42:45 +0000 (0:00:04.412) 0:00:04.571 ********** 2025-06-03 15:43:45.080276 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-03 15:43:45.080285 | orchestrator | 2025-06-03 15:43:45.080294 | orchestrator | 
TASK [Write ceph keys to the share directory] ********************************** 2025-06-03 15:43:45.080302 | orchestrator | Tuesday 03 June 2025 15:42:46 +0000 (0:00:00.968) 0:00:05.539 ********** 2025-06-03 15:43:45.080311 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-03 15:43:45.080320 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-03 15:43:45.080330 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-03 15:43:45.080339 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-03 15:43:45.080349 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-03 15:43:45.080358 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-06-03 15:43:45.080377 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-03 15:43:45.080385 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-03 15:43:45.080437 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-03 15:43:45.080447 | orchestrator | 2025-06-03 15:43:45.080456 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-03 15:43:45.080464 | orchestrator | Tuesday 03 June 2025 15:42:59 +0000 (0:00:13.002) 0:00:18.542 ********** 2025-06-03 15:43:45.080474 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-03 15:43:45.080483 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-03 15:43:45.080493 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-03 15:43:45.080501 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-03 15:43:45.080512 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-03 15:43:45.080518 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-03 15:43:45.080523 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-03 15:43:45.080529 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-06-03 15:43:45.080534 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-06-03 15:43:45.080540 | orchestrator | 2025-06-03 15:43:45.080545 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:43:45.080551 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:43:45.080558 | orchestrator | 2025-06-03 15:43:45.080564 | orchestrator | 2025-06-03 15:43:45.080569 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:43:45.080574 | orchestrator | Tuesday 03 June 2025 15:43:05 +0000 (0:00:06.644) 0:00:25.187 ********** 2025-06-03 15:43:45.080580 | orchestrator | =============================================================================== 2025-06-03 15:43:45.080585 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.00s 2025-06-03 15:43:45.080591 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.64s 
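The "Copy ceph keys to the configuration repository" play summarised in this recap fetches the client keyrings from the first monitor (testbed-node-0 in this job), writes them to a share directory on the deploy host and then into the configuration repository. A rough sketch of that fetch-and-write pattern, assuming hypothetical variable names, group names and destination paths (only the task names come from the log; everything else is illustrative, not the actual OSISM playbook):

- name: Fetch all ceph keys
  ansible.builtin.slurp:
    src: "/etc/ceph/{{ item }}"
  delegate_to: "{{ groups['ceph_mons'][0] }}"   # group name assumed; first monitor as in the log
  loop: "{{ ceph_keyrings }}"                   # e.g. ceph.client.admin.keyring, ceph.client.cinder.keyring, ...
  register: fetched_keys

- name: Write ceph keys to the configuration directory
  ansible.builtin.copy:
    content: "{{ item.content | b64decode }}"
    dest: "/opt/configuration/files/ceph/{{ item.item }}"   # destination path assumed
    mode: "0640"
  loop: "{{ fetched_keys.results }}"
  loop_control:
    label: "{{ item.item }}"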
2025-06-03 15:43:45.080596 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.41s 2025-06-03 15:43:45.080601 | orchestrator | Create share directory -------------------------------------------------- 0.97s 2025-06-03 15:43:45.080607 | orchestrator | 2025-06-03 15:43:45.080612 | orchestrator | 2025-06-03 15:43:45.080617 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:43:45.080623 | orchestrator | 2025-06-03 15:43:45.080641 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:43:45.080656 | orchestrator | Tuesday 03 June 2025 15:41:56 +0000 (0:00:00.293) 0:00:00.293 ********** 2025-06-03 15:43:45.080665 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.080675 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.080683 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:45.080691 | orchestrator | 2025-06-03 15:43:45.080700 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:43:45.080709 | orchestrator | Tuesday 03 June 2025 15:41:56 +0000 (0:00:00.305) 0:00:00.598 ********** 2025-06-03 15:43:45.080722 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-03 15:43:45.080732 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-03 15:43:45.080741 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-03 15:43:45.080749 | orchestrator | 2025-06-03 15:43:45.080767 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-03 15:43:45.080776 | orchestrator | 2025-06-03 15:43:45.080785 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-03 15:43:45.080793 | orchestrator | Tuesday 03 June 2025 15:41:57 +0000 (0:00:00.424) 0:00:01.023 ********** 2025-06-03 15:43:45.080803 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:43:45.080813 | orchestrator | 2025-06-03 15:43:45.080822 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-03 15:43:45.080831 | orchestrator | Tuesday 03 June 2025 15:41:57 +0000 (0:00:00.543) 0:00:01.567 ********** 2025-06-03 15:43:45.080847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:45.080884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:45.080906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:45.080923 | orchestrator | 2025-06-03 15:43:45.080933 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-03 15:43:45.080942 | orchestrator | Tuesday 03 June 2025 15:41:58 +0000 (0:00:01.099) 0:00:02.666 ********** 2025-06-03 15:43:45.080951 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.080962 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.080968 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:45.080974 | orchestrator | 2025-06-03 15:43:45.080981 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-03 15:43:45.080987 | orchestrator | Tuesday 03 June 2025 15:41:59 +0000 (0:00:00.527) 0:00:03.193 ********** 2025-06-03 15:43:45.080993 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-03 15:43:45.081004 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-03 15:43:45.081011 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-03 15:43:45.081018 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-03 15:43:45.081024 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-03 15:43:45.081030 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-03 15:43:45.081036 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-03 15:43:45.081043 | 
orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-03 15:43:45.081053 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-03 15:43:45.081060 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-03 15:43:45.081066 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-03 15:43:45.081071 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-03 15:43:45.081077 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-03 15:43:45.081082 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-03 15:43:45.081087 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-03 15:43:45.081093 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-03 15:43:45.081098 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-03 15:43:45.081104 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-03 15:43:45.081109 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-03 15:43:45.081115 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-03 15:43:45.081120 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-03 15:43:45.081126 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-03 15:43:45.081131 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-03 15:43:45.081136 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-03 15:43:45.081144 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-03 15:43:45.081151 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-03 15:43:45.081157 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-03 15:43:45.081166 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-03 15:43:45.081172 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-03 15:43:45.081177 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-03 15:43:45.081183 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-03 15:43:45.081188 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 
'neutron', 'enabled': True}) 2025-06-03 15:43:45.081194 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-03 15:43:45.081200 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-03 15:43:45.081205 | orchestrator | 2025-06-03 15:43:45.081211 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:45.081216 | orchestrator | Tuesday 03 June 2025 15:42:00 +0000 (0:00:00.702) 0:00:03.896 ********** 2025-06-03 15:43:45.081222 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.081227 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.081233 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:45.081238 | orchestrator | 2025-06-03 15:43:45.081244 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:45.081249 | orchestrator | Tuesday 03 June 2025 15:42:00 +0000 (0:00:00.366) 0:00:04.262 ********** 2025-06-03 15:43:45.081255 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081261 | orchestrator | 2025-06-03 15:43:45.081270 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:45.081275 | orchestrator | Tuesday 03 June 2025 15:42:00 +0000 (0:00:00.121) 0:00:04.383 ********** 2025-06-03 15:43:45.081281 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081286 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.081292 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.081297 | orchestrator | 2025-06-03 15:43:45.081303 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:45.081308 | orchestrator | Tuesday 03 June 2025 15:42:01 +0000 (0:00:00.497) 0:00:04.881 ********** 2025-06-03 15:43:45.081314 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.081319 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.081324 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:45.081330 | orchestrator | 2025-06-03 15:43:45.081339 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:45.081353 | orchestrator | Tuesday 03 June 2025 15:42:01 +0000 (0:00:00.311) 0:00:05.193 ********** 2025-06-03 15:43:45.081362 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081371 | orchestrator | 2025-06-03 15:43:45.081378 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:45.081387 | orchestrator | Tuesday 03 June 2025 15:42:01 +0000 (0:00:00.142) 0:00:05.336 ********** 2025-06-03 15:43:45.081415 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081424 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.081433 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.081442 | orchestrator | 2025-06-03 15:43:45.081450 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:45.081456 | orchestrator | Tuesday 03 June 2025 15:42:01 +0000 (0:00:00.269) 0:00:05.605 ********** 2025-06-03 15:43:45.081461 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.081467 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.081478 | orchestrator | 
ok: [testbed-node-2] 2025-06-03 15:43:45.081483 | orchestrator | 2025-06-03 15:43:45.081489 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:45.081494 | orchestrator | Tuesday 03 June 2025 15:42:02 +0000 (0:00:00.297) 0:00:05.903 ********** 2025-06-03 15:43:45.081500 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081505 | orchestrator | 2025-06-03 15:43:45.081511 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:45.081516 | orchestrator | Tuesday 03 June 2025 15:42:02 +0000 (0:00:00.332) 0:00:06.235 ********** 2025-06-03 15:43:45.081522 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081527 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.081532 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.081538 | orchestrator | 2025-06-03 15:43:45.081543 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:45.081549 | orchestrator | Tuesday 03 June 2025 15:42:02 +0000 (0:00:00.338) 0:00:06.573 ********** 2025-06-03 15:43:45.081554 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.081559 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.081565 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:45.081571 | orchestrator | 2025-06-03 15:43:45.081576 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:45.081582 | orchestrator | Tuesday 03 June 2025 15:42:03 +0000 (0:00:00.332) 0:00:06.906 ********** 2025-06-03 15:43:45.081587 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081593 | orchestrator | 2025-06-03 15:43:45.081598 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:45.081604 | orchestrator | Tuesday 03 June 2025 15:42:03 +0000 (0:00:00.131) 0:00:07.038 ********** 2025-06-03 15:43:45.081610 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081615 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.081620 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.081626 | orchestrator | 2025-06-03 15:43:45.081631 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:45.081636 | orchestrator | Tuesday 03 June 2025 15:42:03 +0000 (0:00:00.287) 0:00:07.326 ********** 2025-06-03 15:43:45.081642 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.081647 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.081653 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:45.081658 | orchestrator | 2025-06-03 15:43:45.081664 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:45.081669 | orchestrator | Tuesday 03 June 2025 15:42:04 +0000 (0:00:00.508) 0:00:07.834 ********** 2025-06-03 15:43:45.081675 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081683 | orchestrator | 2025-06-03 15:43:45.081693 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:45.081703 | orchestrator | Tuesday 03 June 2025 15:42:04 +0000 (0:00:00.141) 0:00:07.975 ********** 2025-06-03 15:43:45.081717 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081726 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.081735 | orchestrator | skipping: [testbed-node-2] 
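The alternating "Update policy file name", "Check if policies shall be overwritten" and "Update custom policy file name" blocks above and below come from including policy_item.yml once per enabled service (ceilometer, cinder, designate, ...), while disabled services are skipped by the loop condition. A rough sketch of that include-per-item pattern, with variable names assumed rather than taken from the Kolla role:

- name: include_tasks
  ansible.builtin.include_tasks: policy_item.yml
  loop: "{{ horizon_policy_services }}"          # assumed: [{'name': 'cinder', 'enabled': 'yes'}, {'name': 'heat', 'enabled': 'no'}, ...]
  when: item.enabled | bool                      # 'no'/False items appear as 'skipping' in the log

# policy_item.yml (sketch)
- name: Update policy file name
  ansible.builtin.set_fact:
    supported_policy_files: "{{ supported_policy_files | default([]) + [item.name ~ '_policy.yaml'] }}"  # assumed naming

- name: Check if policies shall be overwritten
  ansible.builtin.stat:
    path: "{{ node_custom_config }}/horizon/{{ item.name }}_policy.yaml"   # assumed lookup path
  delegate_to: localhost
  run_once: true                                                           # consistent with only testbed-node-0 appearing in the log
  register: custom_policy
  when: write_custom_policies | default(false) | bool                      # assumed guard; false in this run, hence 'skipping'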
2025-06-03 15:43:45.081743 | orchestrator | 2025-06-03 15:43:45.081752 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:45.081761 | orchestrator | Tuesday 03 June 2025 15:42:04 +0000 (0:00:00.324) 0:00:08.300 ********** 2025-06-03 15:43:45.081770 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.081779 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.081787 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:45.081795 | orchestrator | 2025-06-03 15:43:45.081803 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:45.081812 | orchestrator | Tuesday 03 June 2025 15:42:04 +0000 (0:00:00.340) 0:00:08.640 ********** 2025-06-03 15:43:45.081820 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081829 | orchestrator | 2025-06-03 15:43:45.081838 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:45.081857 | orchestrator | Tuesday 03 June 2025 15:42:05 +0000 (0:00:00.145) 0:00:08.786 ********** 2025-06-03 15:43:45.081866 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081875 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.081881 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.081886 | orchestrator | 2025-06-03 15:43:45.081892 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:45.081898 | orchestrator | Tuesday 03 June 2025 15:42:05 +0000 (0:00:00.499) 0:00:09.286 ********** 2025-06-03 15:43:45.081903 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.081914 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.081920 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:45.081926 | orchestrator | 2025-06-03 15:43:45.081931 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:45.081937 | orchestrator | Tuesday 03 June 2025 15:42:05 +0000 (0:00:00.318) 0:00:09.604 ********** 2025-06-03 15:43:45.081943 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081949 | orchestrator | 2025-06-03 15:43:45.081954 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:45.081959 | orchestrator | Tuesday 03 June 2025 15:42:05 +0000 (0:00:00.131) 0:00:09.736 ********** 2025-06-03 15:43:45.081965 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.081971 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.081976 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.081981 | orchestrator | 2025-06-03 15:43:45.081991 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:45.081997 | orchestrator | Tuesday 03 June 2025 15:42:06 +0000 (0:00:00.286) 0:00:10.023 ********** 2025-06-03 15:43:45.082002 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.082008 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.082060 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:45.082069 | orchestrator | 2025-06-03 15:43:45.082074 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:45.082080 | orchestrator | Tuesday 03 June 2025 15:42:06 +0000 (0:00:00.297) 0:00:10.320 ********** 2025-06-03 15:43:45.082086 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.082107 | orchestrator 
| 2025-06-03 15:43:45.082113 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:45.082119 | orchestrator | Tuesday 03 June 2025 15:42:06 +0000 (0:00:00.127) 0:00:10.448 ********** 2025-06-03 15:43:45.082125 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.082130 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.082135 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.082141 | orchestrator | 2025-06-03 15:43:45.082146 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:45.082152 | orchestrator | Tuesday 03 June 2025 15:42:07 +0000 (0:00:00.513) 0:00:10.962 ********** 2025-06-03 15:43:45.082158 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.082171 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.082177 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:45.082183 | orchestrator | 2025-06-03 15:43:45.082188 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:45.082194 | orchestrator | Tuesday 03 June 2025 15:42:07 +0000 (0:00:00.304) 0:00:11.266 ********** 2025-06-03 15:43:45.082199 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.082205 | orchestrator | 2025-06-03 15:43:45.082210 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:45.082215 | orchestrator | Tuesday 03 June 2025 15:42:07 +0000 (0:00:00.118) 0:00:11.385 ********** 2025-06-03 15:43:45.082221 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.082227 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.082232 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.082238 | orchestrator | 2025-06-03 15:43:45.082243 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:45.082255 | orchestrator | Tuesday 03 June 2025 15:42:07 +0000 (0:00:00.305) 0:00:11.691 ********** 2025-06-03 15:43:45.082260 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:45.082266 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:45.082271 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:45.082277 | orchestrator | 2025-06-03 15:43:45.082283 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:45.082288 | orchestrator | Tuesday 03 June 2025 15:42:08 +0000 (0:00:00.517) 0:00:12.209 ********** 2025-06-03 15:43:45.082293 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.082299 | orchestrator | 2025-06-03 15:43:45.082304 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:45.082310 | orchestrator | Tuesday 03 June 2025 15:42:08 +0000 (0:00:00.148) 0:00:12.357 ********** 2025-06-03 15:43:45.082316 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.082321 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.082327 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.082333 | orchestrator | 2025-06-03 15:43:45.082339 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-03 15:43:45.082344 | orchestrator | Tuesday 03 June 2025 15:42:08 +0000 (0:00:00.305) 0:00:12.662 ********** 2025-06-03 15:43:45.082350 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:43:45.082355 | orchestrator | changed: 
[testbed-node-0] 2025-06-03 15:43:45.082360 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:43:45.082366 | orchestrator | 2025-06-03 15:43:45.082371 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-03 15:43:45.082377 | orchestrator | Tuesday 03 June 2025 15:42:10 +0000 (0:00:01.625) 0:00:14.287 ********** 2025-06-03 15:43:45.082382 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-03 15:43:45.082388 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-03 15:43:45.082451 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-03 15:43:45.082457 | orchestrator | 2025-06-03 15:43:45.082462 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-06-03 15:43:45.082468 | orchestrator | Tuesday 03 June 2025 15:42:12 +0000 (0:00:01.995) 0:00:16.283 ********** 2025-06-03 15:43:45.082474 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-03 15:43:45.082479 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-03 15:43:45.082486 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-03 15:43:45.082491 | orchestrator | 2025-06-03 15:43:45.082496 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-06-03 15:43:45.082507 | orchestrator | Tuesday 03 June 2025 15:42:15 +0000 (0:00:02.792) 0:00:19.075 ********** 2025-06-03 15:43:45.082513 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-03 15:43:45.082519 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-03 15:43:45.082524 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-03 15:43:45.082530 | orchestrator | 2025-06-03 15:43:45.082535 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-06-03 15:43:45.082541 | orchestrator | Tuesday 03 June 2025 15:42:16 +0000 (0:00:01.569) 0:00:20.644 ********** 2025-06-03 15:43:45.082547 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.082552 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.082566 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.082572 | orchestrator | 2025-06-03 15:43:45.082577 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-06-03 15:43:45.082583 | orchestrator | Tuesday 03 June 2025 15:42:17 +0000 (0:00:00.284) 0:00:20.928 ********** 2025-06-03 15:43:45.082593 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.082599 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.082604 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.082610 | orchestrator | 2025-06-03 15:43:45.082615 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-03 15:43:45.082620 | orchestrator | Tuesday 03 June 2025 15:42:17 +0000 (0:00:00.344) 0:00:21.273 ********** 2025-06-03 15:43:45.082626 | orchestrator | included: 
/ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:43:45.082631 | orchestrator | 2025-06-03 15:43:45.082637 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-03 15:43:45.082642 | orchestrator | Tuesday 03 June 2025 15:42:18 +0000 (0:00:00.855) 0:00:22.128 ********** 2025-06-03 15:43:45.082649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:45.082669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:45.082681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:45.082687 | orchestrator | 2025-06-03 15:43:45.082693 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-03 15:43:45.082698 | orchestrator | Tuesday 03 June 2025 15:42:19 +0000 (0:00:01.522) 0:00:23.651 ********** 2025-06-03 15:43:45.082713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:45.082729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:45.082736 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.082742 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.082760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:45.082771 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.082781 | orchestrator | 2025-06-03 
15:43:45.082797 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-03 15:43:45.082807 | orchestrator | Tuesday 03 June 2025 15:42:20 +0000 (0:00:00.655) 0:00:24.307 ********** 2025-06-03 15:43:45.082830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:45.082849 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.082860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:45.082868 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.082884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:45.082896 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.082901 | 
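Both "Copying over backend internal TLS certificate" and "... backend internal TLS key" are skipped on every node here because the horizon haproxy entries above carry 'tls_backend': 'no'. The underlying pattern is a guarded copy over the service dictionary; a small sketch with assumed paths and variable names (not the literal service-cert-copy role):

- name: horizon | Copying over backend internal TLS certificate
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/{{ inventory_hostname }}/{{ item.key }}-cert.pem"   # assumed source layout
    dest: "{{ node_config_directory }}/{{ item.key }}/{{ item.key }}-cert.pem"
    mode: "0600"
  loop: "{{ horizon_services | dict2items }}"                                              # assumed variable name
  when:
    - item.value.enabled | bool
    - item.value.haproxy is defined
    - kolla_enable_tls_backend | bool                                                      # effectively 'no' in this run, so the task is skipped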
orchestrator | 2025-06-03 15:43:45.082907 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-03 15:43:45.082913 | orchestrator | Tuesday 03 June 2025 15:42:21 +0000 (0:00:01.030) 0:00:25.337 ********** 2025-06-03 15:43:45.082918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:45.082946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:45.082958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:45.082964 | orchestrator | 2025-06-03 15:43:45.082970 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-03 15:43:45.082975 | orchestrator | Tuesday 03 June 2025 
15:42:22 +0000 (0:00:01.337) 0:00:26.674 ********** 2025-06-03 15:43:45.082985 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:45.082990 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:45.082996 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:45.083001 | orchestrator | 2025-06-03 15:43:45.083007 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-03 15:43:45.083012 | orchestrator | Tuesday 03 June 2025 15:42:23 +0000 (0:00:00.345) 0:00:27.020 ********** 2025-06-03 15:43:45.083022 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:43:45.083028 | orchestrator | 2025-06-03 15:43:45.083033 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-03 15:43:45.083042 | orchestrator | Tuesday 03 June 2025 15:42:24 +0000 (0:00:00.737) 0:00:27.757 ********** 2025-06-03 15:43:45.083054 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:43:45.083068 | orchestrator | 2025-06-03 15:43:45.083077 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-06-03 15:43:45.083086 | orchestrator | Tuesday 03 June 2025 15:42:26 +0000 (0:00:02.545) 0:00:30.303 ********** 2025-06-03 15:43:45.083095 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:43:45.083104 | orchestrator | 2025-06-03 15:43:45.083114 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-03 15:43:45.083124 | orchestrator | Tuesday 03 June 2025 15:42:28 +0000 (0:00:02.410) 0:00:32.714 ********** 2025-06-03 15:43:45.083142 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:43:45.083153 | orchestrator | 2025-06-03 15:43:45.083163 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-03 15:43:45.083174 | orchestrator | Tuesday 03 June 2025 15:42:46 +0000 (0:00:17.183) 0:00:49.897 ********** 2025-06-03 15:43:45.083180 | orchestrator | 2025-06-03 15:43:45.083186 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-03 15:43:45.083192 | orchestrator | Tuesday 03 June 2025 15:42:46 +0000 (0:00:00.063) 0:00:49.961 ********** 2025-06-03 15:43:45.083197 | orchestrator | 2025-06-03 15:43:45.083203 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-03 15:43:45.083209 | orchestrator | Tuesday 03 June 2025 15:42:46 +0000 (0:00:00.065) 0:00:50.026 ********** 2025-06-03 15:43:45.083214 | orchestrator | 2025-06-03 15:43:45.083219 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-03 15:43:45.083225 | orchestrator | Tuesday 03 June 2025 15:42:46 +0000 (0:00:00.066) 0:00:50.092 ********** 2025-06-03 15:43:45.083230 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:43:45.083236 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:43:45.083242 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:43:45.083251 | orchestrator | 2025-06-03 15:43:45.083260 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:43:45.083270 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-06-03 15:43:45.083280 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 
ignored=0 2025-06-03 15:43:45.083289 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-03 15:43:45.083298 | orchestrator | 2025-06-03 15:43:45.083307 | orchestrator | 2025-06-03 15:43:45.083316 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:43:45.083325 | orchestrator | Tuesday 03 June 2025 15:43:44 +0000 (0:00:58.215) 0:01:48.308 ********** 2025-06-03 15:43:45.083335 | orchestrator | =============================================================================== 2025-06-03 15:43:45.083345 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.22s 2025-06-03 15:43:45.083354 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.18s 2025-06-03 15:43:45.083372 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.79s 2025-06-03 15:43:45.083382 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.55s 2025-06-03 15:43:45.083415 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.41s 2025-06-03 15:43:45.083426 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.00s 2025-06-03 15:43:45.083436 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.63s 2025-06-03 15:43:45.083446 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.57s 2025-06-03 15:43:45.083456 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.52s 2025-06-03 15:43:45.083465 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.34s 2025-06-03 15:43:45.083475 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.10s 2025-06-03 15:43:45.083484 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.03s 2025-06-03 15:43:45.083493 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.85s 2025-06-03 15:43:45.083503 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-06-03 15:43:45.083513 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2025-06-03 15:43:45.083531 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s 2025-06-03 15:43:45.083542 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2025-06-03 15:43:45.083552 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.53s 2025-06-03 15:43:45.083561 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2025-06-03 15:43:45.083570 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-06-03 15:43:45.083578 | orchestrator | 2025-06-03 15:43:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:48.123604 | orchestrator | 2025-06-03 15:43:48 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:48.125089 | orchestrator | 2025-06-03 15:43:48 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:48.125126 | orchestrator | 2025-06-03 15:43:48 | INFO  | Wait 1 
second(s) until the next check 2025-06-03 15:43:51.162968 | orchestrator | 2025-06-03 15:43:51 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:51.163045 | orchestrator | 2025-06-03 15:43:51 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:51.163052 | orchestrator | 2025-06-03 15:43:51 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:54.208269 | orchestrator | 2025-06-03 15:43:54 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:54.211077 | orchestrator | 2025-06-03 15:43:54 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:54.211184 | orchestrator | 2025-06-03 15:43:54 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:57.258575 | orchestrator | 2025-06-03 15:43:57 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:43:57.260280 | orchestrator | 2025-06-03 15:43:57 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:43:57.260326 | orchestrator | 2025-06-03 15:43:57 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:00.307045 | orchestrator | 2025-06-03 15:44:00 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:44:00.307883 | orchestrator | 2025-06-03 15:44:00 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:44:00.307973 | orchestrator | 2025-06-03 15:44:00 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:03.358352 | orchestrator | 2025-06-03 15:44:03 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:44:03.360554 | orchestrator | 2025-06-03 15:44:03 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state STARTED 2025-06-03 15:44:03.360618 | orchestrator | 2025-06-03 15:44:03 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:06.425643 | orchestrator | 2025-06-03 15:44:06 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:44:06.426792 | orchestrator | 2025-06-03 15:44:06 | INFO  | Task a16c0d6e-c479-43bf-9f37-7b43270f92ad is in state SUCCESS 2025-06-03 15:44:06.429710 | orchestrator | 2025-06-03 15:44:06 | INFO  | Task 991c7f0c-2f07-42ab-8ed3-42645deee406 is in state STARTED 2025-06-03 15:44:06.432790 | orchestrator | 2025-06-03 15:44:06 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:06.435627 | orchestrator | 2025-06-03 15:44:06 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:06.437030 | orchestrator | 2025-06-03 15:44:06 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:09.504988 | orchestrator | 2025-06-03 15:44:09 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:44:09.506542 | orchestrator | 2025-06-03 15:44:09 | INFO  | Task 991c7f0c-2f07-42ab-8ed3-42645deee406 is in state STARTED 2025-06-03 15:44:09.507215 | orchestrator | 2025-06-03 15:44:09 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:09.508294 | orchestrator | 2025-06-03 15:44:09 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:09.508496 | orchestrator | 2025-06-03 15:44:09 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:12.564659 | orchestrator | 2025-06-03 15:44:12 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 
15:44:12.568031 | orchestrator | 2025-06-03 15:44:12 | INFO  | Task 991c7f0c-2f07-42ab-8ed3-42645deee406 is in state SUCCESS 2025-06-03 15:44:12.568653 | orchestrator | 2025-06-03 15:44:12 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:12.569781 | orchestrator | 2025-06-03 15:44:12 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:12.571470 | orchestrator | 2025-06-03 15:44:12 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:12.574118 | orchestrator | 2025-06-03 15:44:12 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:12.574173 | orchestrator | 2025-06-03 15:44:12 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:15.630287 | orchestrator | 2025-06-03 15:44:15 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:44:15.630403 | orchestrator | 2025-06-03 15:44:15 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:15.630415 | orchestrator | 2025-06-03 15:44:15 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:15.630422 | orchestrator | 2025-06-03 15:44:15 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:15.630428 | orchestrator | 2025-06-03 15:44:15 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:15.630436 | orchestrator | 2025-06-03 15:44:15 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:18.668572 | orchestrator | 2025-06-03 15:44:18 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:44:18.669017 | orchestrator | 2025-06-03 15:44:18 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:18.669955 | orchestrator | 2025-06-03 15:44:18 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:18.676056 | orchestrator | 2025-06-03 15:44:18 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:18.676118 | orchestrator | 2025-06-03 15:44:18 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:18.676132 | orchestrator | 2025-06-03 15:44:18 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:21.747519 | orchestrator | 2025-06-03 15:44:21 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:44:21.750596 | orchestrator | 2025-06-03 15:44:21 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:21.751456 | orchestrator | 2025-06-03 15:44:21 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:21.753196 | orchestrator | 2025-06-03 15:44:21 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:21.754798 | orchestrator | 2025-06-03 15:44:21 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:21.754856 | orchestrator | 2025-06-03 15:44:21 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:24.813178 | orchestrator | 2025-06-03 15:44:24 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:44:24.814380 | orchestrator | 2025-06-03 15:44:24 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:24.815917 | orchestrator | 2025-06-03 15:44:24 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in 
state STARTED 2025-06-03 15:44:24.817311 | orchestrator | 2025-06-03 15:44:24 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:24.819106 | orchestrator | 2025-06-03 15:44:24 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:24.819178 | orchestrator | 2025-06-03 15:44:24 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:27.859256 | orchestrator | 2025-06-03 15:44:27 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:44:27.862311 | orchestrator | 2025-06-03 15:44:27 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:27.866416 | orchestrator | 2025-06-03 15:44:27 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:27.868244 | orchestrator | 2025-06-03 15:44:27 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:27.872143 | orchestrator | 2025-06-03 15:44:27 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:27.872555 | orchestrator | 2025-06-03 15:44:27 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:30.921343 | orchestrator | 2025-06-03 15:44:30 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:44:30.921516 | orchestrator | 2025-06-03 15:44:30 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:30.923530 | orchestrator | 2025-06-03 15:44:30 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:30.926290 | orchestrator | 2025-06-03 15:44:30 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:30.927345 | orchestrator | 2025-06-03 15:44:30 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:30.927460 | orchestrator | 2025-06-03 15:44:30 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:33.962905 | orchestrator | 2025-06-03 15:44:33 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state STARTED 2025-06-03 15:44:33.964687 | orchestrator | 2025-06-03 15:44:33 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:33.966648 | orchestrator | 2025-06-03 15:44:33 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:33.968821 | orchestrator | 2025-06-03 15:44:33 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:33.971654 | orchestrator | 2025-06-03 15:44:33 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:33.972091 | orchestrator | 2025-06-03 15:44:33 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:37.015277 | orchestrator | 2025-06-03 15:44:37.015899 | orchestrator | 2025-06-03 15:44:37.015993 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-03 15:44:37.016063 | orchestrator | 2025-06-03 15:44:37.016078 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-03 15:44:37.016092 | orchestrator | Tuesday 03 June 2025 15:43:09 +0000 (0:00:00.228) 0:00:00.228 ********** 2025-06-03 15:44:37.016105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-03 15:44:37.016119 | orchestrator | 2025-06-03 15:44:37.016133 | orchestrator | TASK 
[osism.services.cephclient : Create required directories] ***************** 2025-06-03 15:44:37.016145 | orchestrator | Tuesday 03 June 2025 15:43:10 +0000 (0:00:00.199) 0:00:00.428 ********** 2025-06-03 15:44:37.016158 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-03 15:44:37.016172 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-03 15:44:37.016184 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-03 15:44:37.016194 | orchestrator | 2025-06-03 15:44:37.016202 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-03 15:44:37.016208 | orchestrator | Tuesday 03 June 2025 15:43:11 +0000 (0:00:01.252) 0:00:01.680 ********** 2025-06-03 15:44:37.016216 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-03 15:44:37.016223 | orchestrator | 2025-06-03 15:44:37.016229 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-03 15:44:37.016236 | orchestrator | Tuesday 03 June 2025 15:43:12 +0000 (0:00:01.196) 0:00:02.877 ********** 2025-06-03 15:44:37.016243 | orchestrator | changed: [testbed-manager] 2025-06-03 15:44:37.016249 | orchestrator | 2025-06-03 15:44:37.016258 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-03 15:44:37.016268 | orchestrator | Tuesday 03 June 2025 15:43:13 +0000 (0:00:01.101) 0:00:03.978 ********** 2025-06-03 15:44:37.016279 | orchestrator | changed: [testbed-manager] 2025-06-03 15:44:37.016289 | orchestrator | 2025-06-03 15:44:37.016306 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-03 15:44:37.016319 | orchestrator | Tuesday 03 June 2025 15:43:14 +0000 (0:00:00.949) 0:00:04.927 ********** 2025-06-03 15:44:37.016329 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
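The "Manage cephclient service" task above only reports ok after a retry, and the task monitor earlier in this log keeps polling with "Wait 1 second(s) until the next check" until each task reaches SUCCESS. Both follow the same bounded-retry pattern; the sketch below is a minimal, hypothetical Python illustration of that pattern (the function name, arguments, and the commented-out probe are assumptions for this example, not code taken from osism or kolla-ansible).

# Illustrative sketch only -- not the actual osism or Ansible implementation.
import time

def wait_until_healthy(check, retries=10, delay=1):
    """Poll `check` until it returns True or the retry budget runs out.

    `retries` and `delay` mirror the "10 retries left" and
    "Wait 1 second(s) until the next check" messages in the log.
    """
    for attempts_left in range(retries - 1, -1, -1):
        if check():
            return True
        print(f"FAILED - RETRYING ({attempts_left} retries left). "
              f"Wait {delay} second(s) until the next check")
        time.sleep(delay)
    return False

# Hypothetical usage: probe a service endpoint until it answers.
# import urllib.request
# wait_until_healthy(lambda: urllib.request.urlopen("http://192.168.16.10:5000", timeout=5).status == 200)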
2025-06-03 15:44:37.016340 | orchestrator | ok: [testbed-manager] 2025-06-03 15:44:37.016369 | orchestrator | 2025-06-03 15:44:37.016380 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-03 15:44:37.016391 | orchestrator | Tuesday 03 June 2025 15:43:55 +0000 (0:00:40.530) 0:00:45.458 ********** 2025-06-03 15:44:37.016401 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-03 15:44:37.016412 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-03 15:44:37.016449 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-03 15:44:37.016461 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-03 15:44:37.016472 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-03 15:44:37.016483 | orchestrator | 2025-06-03 15:44:37.016494 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-03 15:44:37.016505 | orchestrator | Tuesday 03 June 2025 15:43:59 +0000 (0:00:04.095) 0:00:49.553 ********** 2025-06-03 15:44:37.016512 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-03 15:44:37.016519 | orchestrator | 2025-06-03 15:44:37.016526 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-03 15:44:37.016533 | orchestrator | Tuesday 03 June 2025 15:43:59 +0000 (0:00:00.452) 0:00:50.006 ********** 2025-06-03 15:44:37.016540 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:44:37.016546 | orchestrator | 2025-06-03 15:44:37.016553 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-03 15:44:37.016559 | orchestrator | Tuesday 03 June 2025 15:43:59 +0000 (0:00:00.135) 0:00:50.142 ********** 2025-06-03 15:44:37.016566 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:44:37.016573 | orchestrator | 2025-06-03 15:44:37.016579 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-06-03 15:44:37.016586 | orchestrator | Tuesday 03 June 2025 15:44:00 +0000 (0:00:00.328) 0:00:50.471 ********** 2025-06-03 15:44:37.016592 | orchestrator | changed: [testbed-manager] 2025-06-03 15:44:37.016599 | orchestrator | 2025-06-03 15:44:37.016606 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-03 15:44:37.016612 | orchestrator | Tuesday 03 June 2025 15:44:01 +0000 (0:00:01.657) 0:00:52.128 ********** 2025-06-03 15:44:37.016619 | orchestrator | changed: [testbed-manager] 2025-06-03 15:44:37.016625 | orchestrator | 2025-06-03 15:44:37.016632 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-06-03 15:44:37.016638 | orchestrator | Tuesday 03 June 2025 15:44:02 +0000 (0:00:00.743) 0:00:52.872 ********** 2025-06-03 15:44:37.016645 | orchestrator | changed: [testbed-manager] 2025-06-03 15:44:37.016651 | orchestrator | 2025-06-03 15:44:37.016658 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-06-03 15:44:37.016665 | orchestrator | Tuesday 03 June 2025 15:44:03 +0000 (0:00:00.632) 0:00:53.504 ********** 2025-06-03 15:44:37.016672 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-03 15:44:37.016678 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-03 15:44:37.016685 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-06-03 15:44:37.016692 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-06-03 15:44:37.016698 | orchestrator | 2025-06-03 15:44:37.016705 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:44:37.016712 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:44:37.016719 | orchestrator | 2025-06-03 15:44:37.016726 | orchestrator | 2025-06-03 15:44:37.016785 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:44:37.016800 | orchestrator | Tuesday 03 June 2025 15:44:04 +0000 (0:00:01.467) 0:00:54.972 ********** 2025-06-03 15:44:37.016807 | orchestrator | =============================================================================== 2025-06-03 15:44:37.016813 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.53s 2025-06-03 15:44:37.016820 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.10s 2025-06-03 15:44:37.016826 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.66s 2025-06-03 15:44:37.016833 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.47s 2025-06-03 15:44:37.016839 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s 2025-06-03 15:44:37.016846 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.20s 2025-06-03 15:44:37.016860 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.10s 2025-06-03 15:44:37.016866 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s 2025-06-03 15:44:37.016873 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.74s 2025-06-03 15:44:37.016879 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.63s 2025-06-03 15:44:37.016886 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s 2025-06-03 15:44:37.016892 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.33s 2025-06-03 15:44:37.016899 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2025-06-03 15:44:37.016905 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2025-06-03 15:44:37.016912 | orchestrator | 2025-06-03 15:44:37.016918 | orchestrator | 2025-06-03 15:44:37.016925 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:44:37.016932 | orchestrator | 2025-06-03 15:44:37.016938 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:44:37.016945 | orchestrator | Tuesday 03 June 2025 15:44:08 +0000 (0:00:00.178) 0:00:00.178 ********** 2025-06-03 15:44:37.016952 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:44:37.016958 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:44:37.016965 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:44:37.016971 | orchestrator | 2025-06-03 15:44:37.016978 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:44:37.016985 | orchestrator | Tuesday 03 June 2025 15:44:09 +0000 (0:00:00.362) 0:00:00.540 ********** 2025-06-03 15:44:37.016991 | orchestrator | ok: 
[testbed-node-0] => (item=enable_keystone_True) 2025-06-03 15:44:37.016998 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-03 15:44:37.017004 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-03 15:44:37.017011 | orchestrator | 2025-06-03 15:44:37.017017 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-06-03 15:44:37.017024 | orchestrator | 2025-06-03 15:44:37.017030 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-06-03 15:44:37.017037 | orchestrator | Tuesday 03 June 2025 15:44:09 +0000 (0:00:00.700) 0:00:01.241 ********** 2025-06-03 15:44:37.017043 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:44:37.017050 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:44:37.017056 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:44:37.017063 | orchestrator | 2025-06-03 15:44:37.017069 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:44:37.017077 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:44:37.017087 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:44:37.017099 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:44:37.017110 | orchestrator | 2025-06-03 15:44:37.017121 | orchestrator | 2025-06-03 15:44:37.017133 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:44:37.017144 | orchestrator | Tuesday 03 June 2025 15:44:10 +0000 (0:00:00.732) 0:00:01.973 ********** 2025-06-03 15:44:37.017156 | orchestrator | =============================================================================== 2025-06-03 15:44:37.017167 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.73s 2025-06-03 15:44:37.017179 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2025-06-03 15:44:37.017191 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2025-06-03 15:44:37.017203 | orchestrator | 2025-06-03 15:44:37.017215 | orchestrator | 2025-06-03 15:44:37.017223 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:44:37.017236 | orchestrator | 2025-06-03 15:44:37.017242 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:44:37.017249 | orchestrator | Tuesday 03 June 2025 15:41:56 +0000 (0:00:00.258) 0:00:00.258 ********** 2025-06-03 15:44:37.017256 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:44:37.017262 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:44:37.017269 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:44:37.017275 | orchestrator | 2025-06-03 15:44:37.017282 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:44:37.017290 | orchestrator | Tuesday 03 June 2025 15:41:56 +0000 (0:00:00.287) 0:00:00.545 ********** 2025-06-03 15:44:37.017301 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-03 15:44:37.017311 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-03 15:44:37.017322 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-03 
15:44:37.017334 | orchestrator | 2025-06-03 15:44:37.017391 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-06-03 15:44:37.017400 | orchestrator | 2025-06-03 15:44:37.017440 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-03 15:44:37.017448 | orchestrator | Tuesday 03 June 2025 15:41:57 +0000 (0:00:00.440) 0:00:00.986 ********** 2025-06-03 15:44:37.017455 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:44:37.017462 | orchestrator | 2025-06-03 15:44:37.017469 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-06-03 15:44:37.017475 | orchestrator | Tuesday 03 June 2025 15:41:57 +0000 (0:00:00.541) 0:00:01.528 ********** 2025-06-03 15:44:37.017486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.017497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.017505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.017540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017590 | orchestrator | 2025-06-03 15:44:37.017597 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-03 15:44:37.017604 | orchestrator | Tuesday 03 June 2025 15:41:59 +0000 (0:00:01.788) 0:00:03.316 ********** 2025-06-03 15:44:37.017611 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-03 15:44:37.017618 | orchestrator | 2025-06-03 15:44:37.017625 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-03 15:44:37.017631 | orchestrator | Tuesday 03 June 2025 15:42:00 +0000 (0:00:00.922) 0:00:04.238 ********** 2025-06-03 15:44:37.017638 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:44:37.017644 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:44:37.017651 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:44:37.017658 | orchestrator | 2025-06-03 15:44:37.017664 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-03 15:44:37.017671 | orchestrator | Tuesday 03 June 2025 15:42:01 +0000 (0:00:00.505) 0:00:04.744 ********** 2025-06-03 15:44:37.017678 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:44:37.017684 | orchestrator | 2025-06-03 15:44:37.017691 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-03 15:44:37.017704 | orchestrator | Tuesday 03 June 2025 15:42:01 +0000 (0:00:00.693) 0:00:05.437 ********** 2025-06-03 15:44:37.017712 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:44:37.017718 | orchestrator | 2025-06-03 15:44:37.017725 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-03 15:44:37.017732 | orchestrator | Tuesday 03 June 2025 15:42:02 +0000 (0:00:00.542) 0:00:05.979 ********** 2025-06-03 15:44:37.017739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.017747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.017760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.017768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.017862 | orchestrator | 2025-06-03 15:44:37.017873 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-03 15:44:37.017883 | orchestrator | Tuesday 03 June 2025 15:42:05 +0000 (0:00:03.535) 0:00:09.515 ********** 2025-06-03 15:44:37.017907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:37.017920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.017931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:37.017942 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.017967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:37.017977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.017984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:37.017990 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:37.018007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:37.018098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.018114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:37.018120 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:37.018127 | orchestrator | 2025-06-03 15:44:37.018133 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-03 15:44:37.018139 | orchestrator | Tuesday 03 June 2025 15:42:06 +0000 (0:00:00.596) 0:00:10.112 ********** 2025-06-03 15:44:37.018146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:37.018153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.018187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:37 | INFO  | Task c347d7f9-0135-4d0f-b5d7-a424bd011720 is in state SUCCESS 2025-06-03 15:44:37.018197 | orchestrator | 2025-06-03 15:44:37.018221 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.018229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:37.018242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.018249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:37.018255 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:37.018335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:37.018376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.018384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:37.018396 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:37.018403 | orchestrator | 2025-06-03 15:44:37.018409 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-03 15:44:37.018415 | orchestrator | Tuesday 03 June 2025 15:42:07 +0000 (0:00:00.767) 0:00:10.879 ********** 2025-06-03 15:44:37.018422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.018429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.018445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.018452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018496 | orchestrator | 2025-06-03 15:44:37.018502 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-03 15:44:37.018515 | orchestrator | Tuesday 03 June 2025 15:42:10 +0000 (0:00:03.642) 0:00:14.522 ********** 2025-06-03 15:44:37.018522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.018535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.018542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.018549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.018563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.018576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.018582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018602 | orchestrator | 2025-06-03 15:44:37.018608 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-03 15:44:37.018614 | orchestrator | Tuesday 03 June 2025 15:42:16 +0000 (0:00:05.336) 0:00:19.859 ********** 2025-06-03 15:44:37.018620 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:44:37.018627 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:44:37.018633 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:44:37.018639 | orchestrator | 2025-06-03 15:44:37.018645 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-03 15:44:37.018651 | orchestrator | Tuesday 03 June 2025 15:42:17 +0000 (0:00:01.583) 0:00:21.443 ********** 2025-06-03 15:44:37.018657 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.018663 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:37.018669 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:37.018675 | orchestrator | 2025-06-03 15:44:37.018681 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-03 15:44:37.018687 | orchestrator | Tuesday 03 June 2025 15:42:18 +0000 (0:00:00.494) 0:00:21.937 ********** 2025-06-03 15:44:37.018693 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.018699 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:37.018710 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:37.018716 | orchestrator | 2025-06-03 15:44:37.018722 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-03 15:44:37.018728 | orchestrator | Tuesday 03 June 2025 15:42:18 +0000 (0:00:00.433) 0:00:22.370 ********** 2025-06-03 15:44:37.018735 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.018741 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:37.018750 | orchestrator | skipping: 
[testbed-node-2] 2025-06-03 15:44:37.018760 | orchestrator | 2025-06-03 15:44:37.018770 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-03 15:44:37.018785 | orchestrator | Tuesday 03 June 2025 15:42:19 +0000 (0:00:00.340) 0:00:22.711 ********** 2025-06-03 15:44:37.018800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.018810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.018820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.018830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.018862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.018874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:37.018884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.018916 | orchestrator | 2025-06-03 15:44:37.018922 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-03 15:44:37.018928 | orchestrator | Tuesday 03 June 2025 15:42:21 +0000 (0:00:02.572) 0:00:25.283 ********** 2025-06-03 15:44:37.018934 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.018945 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:37.018953 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:37.018963 | orchestrator | 2025-06-03 15:44:37.018973 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-06-03 15:44:37.018984 | orchestrator | Tuesday 03 June 2025 15:42:21 +0000 (0:00:00.331) 0:00:25.615 ********** 2025-06-03 15:44:37.018993 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-03 15:44:37.019004 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-03 15:44:37.019013 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-03 15:44:37.019023 | orchestrator | 2025-06-03 15:44:37.019033 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-03 15:44:37.019044 | orchestrator | Tuesday 03 June 2025 15:42:24 +0000 (0:00:02.092) 0:00:27.707 ********** 2025-06-03 15:44:37.019054 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:44:37.019064 | orchestrator | 2025-06-03 15:44:37.019074 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-03 15:44:37.019084 | orchestrator | Tuesday 03 June 2025 15:42:25 +0000 (0:00:01.038) 0:00:28.746 ********** 2025-06-03 15:44:37.019095 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.019105 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:37.019116 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:37.019125 | orchestrator | 2025-06-03 15:44:37.019136 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-03 15:44:37.019152 | orchestrator | Tuesday 03 June 2025 15:42:25 +0000 (0:00:00.521) 0:00:29.268 ********** 2025-06-03 15:44:37.019159 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-03 15:44:37.019165 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-03 15:44:37.019171 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:44:37.019177 | orchestrator | 2025-06-03 15:44:37.019183 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-03 15:44:37.019190 | orchestrator | Tuesday 03 June 2025 15:42:26 +0000 (0:00:01.144) 0:00:30.412 ********** 2025-06-03 15:44:37.019196 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:44:37.019202 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:44:37.019209 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:44:37.019215 | orchestrator | 2025-06-03 15:44:37.019221 | orchestrator | TASK [keystone : 
Copying files for keystone-fernet] **************************** 2025-06-03 15:44:37.019227 | orchestrator | Tuesday 03 June 2025 15:42:27 +0000 (0:00:00.312) 0:00:30.725 ********** 2025-06-03 15:44:37.019233 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-03 15:44:37.019239 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-03 15:44:37.019245 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-03 15:44:37.019251 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-03 15:44:37.019258 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-03 15:44:37.019264 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-03 15:44:37.019270 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-03 15:44:37.019276 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-03 15:44:37.019282 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-03 15:44:37.019289 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-03 15:44:37.019295 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-03 15:44:37.019306 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-03 15:44:37.019312 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-03 15:44:37.019318 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-03 15:44:37.019324 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-03 15:44:37.019330 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-03 15:44:37.019336 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-03 15:44:37.019389 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-03 15:44:37.019398 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-03 15:44:37.019404 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-03 15:44:37.019410 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-03 15:44:37.019416 | orchestrator | 2025-06-03 15:44:37.019422 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-03 15:44:37.019429 | orchestrator | Tuesday 03 June 2025 15:42:36 +0000 (0:00:09.155) 0:00:39.881 ********** 2025-06-03 15:44:37.019435 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-03 15:44:37.019441 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-03 15:44:37.019447 | 
orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-03 15:44:37.019453 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-03 15:44:37.019459 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-03 15:44:37.019465 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-03 15:44:37.019471 | orchestrator | 2025-06-03 15:44:37.019478 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-03 15:44:37.019484 | orchestrator | Tuesday 03 June 2025 15:42:39 +0000 (0:00:02.829) 0:00:42.710 ********** 2025-06-03 15:44:37.019501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.019509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.019521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:37.019528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.019534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.019549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:37.019556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.019567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.019574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:37.019580 | orchestrator | 2025-06-03 15:44:37.019586 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-03 15:44:37.019593 | orchestrator | Tuesday 03 June 2025 15:42:41 +0000 (0:00:02.371) 0:00:45.082 ********** 2025-06-03 15:44:37.019599 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.019605 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:37.019612 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:37.019618 | orchestrator | 2025-06-03 15:44:37.019624 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-03 15:44:37.019630 | orchestrator | Tuesday 03 June 2025 15:42:41 +0000 (0:00:00.283) 0:00:45.366 ********** 2025-06-03 15:44:37.019636 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:44:37.019642 | orchestrator | 2025-06-03 15:44:37.019648 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-03 15:44:37.019655 | orchestrator | Tuesday 03 June 2025 15:42:44 +0000 (0:00:02.492) 0:00:47.858 ********** 2025-06-03 15:44:37.019661 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:44:37.019667 | orchestrator | 2025-06-03 15:44:37.019673 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-03 15:44:37.019679 | orchestrator | Tuesday 03 June 2025 15:42:46 +0000 (0:00:02.755) 0:00:50.614 ********** 2025-06-03 15:44:37.019685 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:44:37.019692 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:44:37.019698 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:44:37.019704 | orchestrator | 2025-06-03 15:44:37.019710 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-03 15:44:37.019716 | orchestrator | Tuesday 03 June 2025 15:42:48 +0000 (0:00:01.076) 0:00:51.690 ********** 2025-06-03 15:44:37.019722 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:44:37.019729 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:44:37.019735 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:44:37.019741 | orchestrator | 2025-06-03 15:44:37.019747 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-03 15:44:37.019757 | orchestrator | Tuesday 03 June 2025 15:42:48 +0000 (0:00:00.391) 0:00:52.082 ********** 2025-06-03 15:44:37.019768 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.019779 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:37.019790 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:37.019800 | orchestrator | 2025-06-03 
15:44:37.019809 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-03 15:44:37.019820 | orchestrator | Tuesday 03 June 2025 15:42:49 +0000 (0:00:00.602) 0:00:52.684 ********** 2025-06-03 15:44:37.019858 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:44:37.019868 | orchestrator | 2025-06-03 15:44:37.019878 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-03 15:44:37.020003 | orchestrator | Tuesday 03 June 2025 15:43:03 +0000 (0:00:14.853) 0:01:07.537 ********** 2025-06-03 15:44:37.020010 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:44:37.020015 | orchestrator | 2025-06-03 15:44:37.020021 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-03 15:44:37.020026 | orchestrator | Tuesday 03 June 2025 15:43:14 +0000 (0:00:11.105) 0:01:18.642 ********** 2025-06-03 15:44:37.020031 | orchestrator | 2025-06-03 15:44:37.020037 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-03 15:44:37.020042 | orchestrator | Tuesday 03 June 2025 15:43:15 +0000 (0:00:00.267) 0:01:18.910 ********** 2025-06-03 15:44:37.020048 | orchestrator | 2025-06-03 15:44:37.020053 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-03 15:44:37.020059 | orchestrator | Tuesday 03 June 2025 15:43:15 +0000 (0:00:00.066) 0:01:18.977 ********** 2025-06-03 15:44:37.020064 | orchestrator | 2025-06-03 15:44:37.020069 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-03 15:44:37.020075 | orchestrator | Tuesday 03 June 2025 15:43:15 +0000 (0:00:00.063) 0:01:19.040 ********** 2025-06-03 15:44:37.020080 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:44:37.020085 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:44:37.020091 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:44:37.020096 | orchestrator | 2025-06-03 15:44:37.020101 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-03 15:44:37.020107 | orchestrator | Tuesday 03 June 2025 15:43:31 +0000 (0:00:15.952) 0:01:34.993 ********** 2025-06-03 15:44:37.020112 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:44:37.020118 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:44:37.020123 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:44:37.020128 | orchestrator | 2025-06-03 15:44:37.020134 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-06-03 15:44:37.020139 | orchestrator | Tuesday 03 June 2025 15:43:37 +0000 (0:00:05.958) 0:01:40.952 ********** 2025-06-03 15:44:37.020145 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:44:37.020150 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:44:37.020155 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:44:37.020161 | orchestrator | 2025-06-03 15:44:37.020166 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-03 15:44:37.020171 | orchestrator | Tuesday 03 June 2025 15:43:43 +0000 (0:00:06.070) 0:01:47.022 ********** 2025-06-03 15:44:37.020177 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:44:37.020182 | orchestrator | 2025-06-03 15:44:37.020188 | orchestrator | TASK [keystone : 
Waiting for Keystone SSH port to be UP] *********************** 2025-06-03 15:44:37.020193 | orchestrator | Tuesday 03 June 2025 15:43:44 +0000 (0:00:00.749) 0:01:47.772 ********** 2025-06-03 15:44:37.020199 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:44:37.020204 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:44:37.020209 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:44:37.020215 | orchestrator | 2025-06-03 15:44:37.020220 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-03 15:44:37.020226 | orchestrator | Tuesday 03 June 2025 15:43:44 +0000 (0:00:00.762) 0:01:48.534 ********** 2025-06-03 15:44:37.020231 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:44:37.020236 | orchestrator | 2025-06-03 15:44:37.020242 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-03 15:44:37.020247 | orchestrator | Tuesday 03 June 2025 15:43:46 +0000 (0:00:01.873) 0:01:50.408 ********** 2025-06-03 15:44:37.020253 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-06-03 15:44:37.020258 | orchestrator | 2025-06-03 15:44:37.020263 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-03 15:44:37.020274 | orchestrator | Tuesday 03 June 2025 15:43:58 +0000 (0:00:11.930) 0:02:02.339 ********** 2025-06-03 15:44:37.020280 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-03 15:44:37.020285 | orchestrator | 2025-06-03 15:44:37.020290 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-03 15:44:37.020296 | orchestrator | Tuesday 03 June 2025 15:44:21 +0000 (0:00:22.389) 0:02:24.728 ********** 2025-06-03 15:44:37.020301 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-03 15:44:37.020307 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-03 15:44:37.020312 | orchestrator | 2025-06-03 15:44:37.020318 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-03 15:44:37.020323 | orchestrator | Tuesday 03 June 2025 15:44:28 +0000 (0:00:07.783) 0:02:32.511 ********** 2025-06-03 15:44:37.020328 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.020334 | orchestrator | 2025-06-03 15:44:37.020339 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-03 15:44:37.020362 | orchestrator | Tuesday 03 June 2025 15:44:29 +0000 (0:00:00.707) 0:02:33.219 ********** 2025-06-03 15:44:37.020430 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.020438 | orchestrator | 2025-06-03 15:44:37.020443 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-03 15:44:37.020449 | orchestrator | Tuesday 03 June 2025 15:44:29 +0000 (0:00:00.183) 0:02:33.403 ********** 2025-06-03 15:44:37.020454 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.020460 | orchestrator | 2025-06-03 15:44:37.020465 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-03 15:44:37.020471 | orchestrator | Tuesday 03 June 2025 15:44:29 +0000 (0:00:00.156) 0:02:33.560 ********** 2025-06-03 15:44:37.020476 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.020481 | orchestrator | 2025-06-03 15:44:37.020487 | 
orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-03 15:44:37.020492 | orchestrator | Tuesday 03 June 2025 15:44:30 +0000 (0:00:00.322) 0:02:33.882 ********** 2025-06-03 15:44:37.020497 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:44:37.020503 | orchestrator | 2025-06-03 15:44:37.020508 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-03 15:44:37.020519 | orchestrator | Tuesday 03 June 2025 15:44:33 +0000 (0:00:03.468) 0:02:37.351 ********** 2025-06-03 15:44:37.020528 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:37.020534 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:37.020539 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:37.020545 | orchestrator | 2025-06-03 15:44:37.020550 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:44:37.020556 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-03 15:44:37.020563 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-03 15:44:37.020568 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-03 15:44:37.020574 | orchestrator | 2025-06-03 15:44:37.020579 | orchestrator | 2025-06-03 15:44:37.020585 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:44:37.020590 | orchestrator | Tuesday 03 June 2025 15:44:34 +0000 (0:00:00.526) 0:02:37.877 ********** 2025-06-03 15:44:37.020596 | orchestrator | =============================================================================== 2025-06-03 15:44:37.020601 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.39s 2025-06-03 15:44:37.020606 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 15.95s 2025-06-03 15:44:37.020617 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.85s 2025-06-03 15:44:37.020622 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.93s 2025-06-03 15:44:37.020628 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.11s 2025-06-03 15:44:37.020633 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.16s 2025-06-03 15:44:37.020639 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.78s 2025-06-03 15:44:37.020644 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.07s 2025-06-03 15:44:37.020649 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.96s 2025-06-03 15:44:37.020655 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.34s 2025-06-03 15:44:37.020660 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.64s 2025-06-03 15:44:37.020666 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.54s 2025-06-03 15:44:37.020671 | orchestrator | keystone : Creating default user role ----------------------------------- 3.47s 2025-06-03 15:44:37.020676 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.83s 2025-06-03 
15:44:37.020682 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.76s 2025-06-03 15:44:37.020687 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.57s 2025-06-03 15:44:37.020692 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.49s 2025-06-03 15:44:37.020698 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.37s 2025-06-03 15:44:37.020703 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.09s 2025-06-03 15:44:37.020708 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.87s 2025-06-03 15:44:37.020714 | orchestrator | 2025-06-03 15:44:37 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:44:37.020719 | orchestrator | 2025-06-03 15:44:37 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:37.021242 | orchestrator | 2025-06-03 15:44:37 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:37.024979 | orchestrator | 2025-06-03 15:44:37 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:37.027232 | orchestrator | 2025-06-03 15:44:37 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:37.027275 | orchestrator | 2025-06-03 15:44:37 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:40.062279 | orchestrator | 2025-06-03 15:44:40 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:44:40.062502 | orchestrator | 2025-06-03 15:44:40 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:40.063465 | orchestrator | 2025-06-03 15:44:40 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:40.064060 | orchestrator | 2025-06-03 15:44:40 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:40.064601 | orchestrator | 2025-06-03 15:44:40 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:40.064670 | orchestrator | 2025-06-03 15:44:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:43.105700 | orchestrator | 2025-06-03 15:44:43 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:44:43.106181 | orchestrator | 2025-06-03 15:44:43 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:43.107088 | orchestrator | 2025-06-03 15:44:43 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:43.108205 | orchestrator | 2025-06-03 15:44:43 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:43.109213 | orchestrator | 2025-06-03 15:44:43 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:43.109472 | orchestrator | 2025-06-03 15:44:43 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:46.132088 | orchestrator | 2025-06-03 15:44:46 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:44:46.132299 | orchestrator | 2025-06-03 15:44:46 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:46.132986 | orchestrator | 2025-06-03 15:44:46 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:46.134175 | 
orchestrator | 2025-06-03 15:44:46 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:46.134198 | orchestrator | 2025-06-03 15:44:46 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:46.134203 | orchestrator | 2025-06-03 15:44:46 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:49.167459 | orchestrator | 2025-06-03 15:44:49 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:44:49.168604 | orchestrator | 2025-06-03 15:44:49 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:49.168992 | orchestrator | 2025-06-03 15:44:49 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:49.169816 | orchestrator | 2025-06-03 15:44:49 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:49.171469 | orchestrator | 2025-06-03 15:44:49 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:49.172552 | orchestrator | 2025-06-03 15:44:49 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:52.206486 | orchestrator | 2025-06-03 15:44:52 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:44:52.206590 | orchestrator | 2025-06-03 15:44:52 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:52.206604 | orchestrator | 2025-06-03 15:44:52 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:52.206616 | orchestrator | 2025-06-03 15:44:52 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state STARTED 2025-06-03 15:44:52.206873 | orchestrator | 2025-06-03 15:44:52 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:52.212064 | orchestrator | 2025-06-03 15:44:52 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:55.237676 | orchestrator | 2025-06-03 15:44:55 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:44:55.238926 | orchestrator | 2025-06-03 15:44:55 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:44:55.240431 | orchestrator | 2025-06-03 15:44:55 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:55.244063 | orchestrator | 2025-06-03 15:44:55 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:55.244118 | orchestrator | 2025-06-03 15:44:55 | INFO  | Task 3295c5d1-0050-4f4a-aa3b-30d370aa142c is in state SUCCESS 2025-06-03 15:44:55.244130 | orchestrator | 2025-06-03 15:44:55 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:44:55.244142 | orchestrator | 2025-06-03 15:44:55 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:44:58.279717 | orchestrator | 2025-06-03 15:44:58 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:44:58.279942 | orchestrator | 2025-06-03 15:44:58 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:44:58.280891 | orchestrator | 2025-06-03 15:44:58 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:44:58.281608 | orchestrator | 2025-06-03 15:44:58 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:44:58.283087 | orchestrator | 2025-06-03 15:44:58 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 
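The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" records above are the OSISM client polling the background tasks it has queued until each one reports SUCCESS. A minimal sketch of such a polling loop is shown below, assuming a get_state callable that queries the task backend; the helper name and the shortened task IDs are illustrative, not the actual OSISM implementation. Note also that consecutive checks in the log are roughly three seconds apart even though the message announces a one-second wait, presumably because the state queries themselves take time.

    import time

    def wait_for_tasks(task_ids, get_state, interval=1.0):
        """Poll task states until every task has left the STARTED/PENDING phase.

        get_state is a callable returning a state string for a task ID; in a
        real deployment it would query the task result backend (assumption).
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state not in ("STARTED", "PENDING"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

    if __name__ == "__main__":
        # Dummy backend that reports SUCCESS after three polls, just to exercise the loop.
        seen = {}
        def fake_state(task_id):
            seen[task_id] = seen.get(task_id, 0) + 1
            return "SUCCESS" if seen[task_id] >= 3 else "STARTED"

        wait_for_tasks(["6be4d437", "4c569064"], fake_state, interval=0.1)
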
2025-06-03 15:44:58.284815 | orchestrator | 2025-06-03 15:44:58 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:01.320162 | orchestrator | 2025-06-03 15:45:01 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:01.320908 | orchestrator | 2025-06-03 15:45:01 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:01.322109 | orchestrator | 2025-06-03 15:45:01 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:01.322777 | orchestrator | 2025-06-03 15:45:01 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:01.323683 | orchestrator | 2025-06-03 15:45:01 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:01.323710 | orchestrator | 2025-06-03 15:45:01 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:04.356248 | orchestrator | 2025-06-03 15:45:04 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:04.356601 | orchestrator | 2025-06-03 15:45:04 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:04.357701 | orchestrator | 2025-06-03 15:45:04 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:04.358813 | orchestrator | 2025-06-03 15:45:04 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:04.360067 | orchestrator | 2025-06-03 15:45:04 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:04.360119 | orchestrator | 2025-06-03 15:45:04 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:07.407274 | orchestrator | 2025-06-03 15:45:07 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:07.408543 | orchestrator | 2025-06-03 15:45:07 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:07.408877 | orchestrator | 2025-06-03 15:45:07 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:07.410267 | orchestrator | 2025-06-03 15:45:07 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:07.411442 | orchestrator | 2025-06-03 15:45:07 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:07.411480 | orchestrator | 2025-06-03 15:45:07 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:10.452050 | orchestrator | 2025-06-03 15:45:10 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:10.452717 | orchestrator | 2025-06-03 15:45:10 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:10.453489 | orchestrator | 2025-06-03 15:45:10 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:10.456482 | orchestrator | 2025-06-03 15:45:10 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:10.457102 | orchestrator | 2025-06-03 15:45:10 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:10.457160 | orchestrator | 2025-06-03 15:45:10 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:13.499081 | orchestrator | 2025-06-03 15:45:13 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:13.500141 | orchestrator | 2025-06-03 15:45:13 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 
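The lines above come from the OSISM client polling the manager for the state of the Celery tasks it scheduled (one task per play), sleeping briefly between checks until every task reports SUCCESS or FAILURE. A minimal sketch of such a watch loop, with a hypothetical get_task_state() helper standing in for the real result-backend lookup:

    import random
    import time

    def get_task_state(task_id):
        # Hypothetical lookup: the real client reads the state from the
        # Celery result backend on the manager; simulated here.
        return random.choice(["STARTED", "STARTED", "SUCCESS"])

    def wait_for_tasks(task_ids, interval=1):
        # Poll every scheduled task until it leaves the STARTED state,
        # mirroring the "Wait 1 second(s) until the next check" lines.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

    wait_for_tasks(["6be4d437-c21e-4147-b09a-2bf2d7c5fad3",
                    "4c569064-c732-456b-ba83-73abc6c144f6"])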
2025-06-03 15:45:13.502707 | orchestrator | 2025-06-03 15:45:13 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:13.506000 | orchestrator | 2025-06-03 15:45:13 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:13.506470 | orchestrator | 2025-06-03 15:45:13 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:13.506615 | orchestrator | 2025-06-03 15:45:13 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:16.551298 | orchestrator | 2025-06-03 15:45:16 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:16.551459 | orchestrator | 2025-06-03 15:45:16 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:16.551630 | orchestrator | 2025-06-03 15:45:16 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:16.552662 | orchestrator | 2025-06-03 15:45:16 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:16.553787 | orchestrator | 2025-06-03 15:45:16 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:16.555021 | orchestrator | 2025-06-03 15:45:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:19.577519 | orchestrator | 2025-06-03 15:45:19 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:19.578171 | orchestrator | 2025-06-03 15:45:19 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:19.578947 | orchestrator | 2025-06-03 15:45:19 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:19.580117 | orchestrator | 2025-06-03 15:45:19 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:19.581151 | orchestrator | 2025-06-03 15:45:19 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:19.581188 | orchestrator | 2025-06-03 15:45:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:22.610777 | orchestrator | 2025-06-03 15:45:22 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:22.613212 | orchestrator | 2025-06-03 15:45:22 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:22.615265 | orchestrator | 2025-06-03 15:45:22 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:22.615836 | orchestrator | 2025-06-03 15:45:22 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:22.616787 | orchestrator | 2025-06-03 15:45:22 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:22.617591 | orchestrator | 2025-06-03 15:45:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:25.645068 | orchestrator | 2025-06-03 15:45:25 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:25.645443 | orchestrator | 2025-06-03 15:45:25 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:25.646184 | orchestrator | 2025-06-03 15:45:25 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:25.646779 | orchestrator | 2025-06-03 15:45:25 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:25.647438 | orchestrator | 2025-06-03 15:45:25 | INFO  | Task 
30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:25.647458 | orchestrator | 2025-06-03 15:45:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:28.667966 | orchestrator | 2025-06-03 15:45:28 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:28.669490 | orchestrator | 2025-06-03 15:45:28 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:28.669922 | orchestrator | 2025-06-03 15:45:28 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:28.670859 | orchestrator | 2025-06-03 15:45:28 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:28.671214 | orchestrator | 2025-06-03 15:45:28 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:28.671369 | orchestrator | 2025-06-03 15:45:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:31.694519 | orchestrator | 2025-06-03 15:45:31 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:31.694614 | orchestrator | 2025-06-03 15:45:31 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:31.696413 | orchestrator | 2025-06-03 15:45:31 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:31.696759 | orchestrator | 2025-06-03 15:45:31 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:31.697586 | orchestrator | 2025-06-03 15:45:31 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:31.697630 | orchestrator | 2025-06-03 15:45:31 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:34.754469 | orchestrator | 2025-06-03 15:45:34 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:34.754650 | orchestrator | 2025-06-03 15:45:34 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:34.755876 | orchestrator | 2025-06-03 15:45:34 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:34.756863 | orchestrator | 2025-06-03 15:45:34 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:34.757739 | orchestrator | 2025-06-03 15:45:34 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state STARTED 2025-06-03 15:45:34.757779 | orchestrator | 2025-06-03 15:45:34 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:37.790098 | orchestrator | 2025-06-03 15:45:37 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:37.796613 | orchestrator | 2025-06-03 15:45:37 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:37.796799 | orchestrator | 2025-06-03 15:45:37 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:37.796831 | orchestrator | 2025-06-03 15:45:37 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:37.797463 | orchestrator | 2025-06-03 15:45:37 | INFO  | Task 30f82f45-8a76-4cf0-86de-24fc426b4d3c is in state SUCCESS 2025-06-03 15:45:37.798200 | orchestrator | 2025-06-03 15:45:37.798266 | orchestrator | 2025-06-03 15:45:37.798339 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:45:37.798387 | orchestrator | 2025-06-03 15:45:37.798397 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2025-06-03 15:45:37.798406 | orchestrator | Tuesday 03 June 2025 15:44:16 +0000 (0:00:00.283) 0:00:00.283 ********** 2025-06-03 15:45:37.798414 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:45:37.798423 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:45:37.798431 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:45:37.798440 | orchestrator | ok: [testbed-manager] 2025-06-03 15:45:37.798447 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:45:37.798455 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:45:37.798463 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:45:37.798470 | orchestrator | 2025-06-03 15:45:37.798478 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:45:37.798487 | orchestrator | Tuesday 03 June 2025 15:44:17 +0000 (0:00:00.874) 0:00:01.158 ********** 2025-06-03 15:45:37.798495 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-03 15:45:37.798503 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-03 15:45:37.798511 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-03 15:45:37.798520 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-03 15:45:37.798527 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-03 15:45:37.798535 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-03 15:45:37.798543 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-03 15:45:37.798551 | orchestrator | 2025-06-03 15:45:37.798559 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-03 15:45:37.798567 | orchestrator | 2025-06-03 15:45:37.798575 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-03 15:45:37.798583 | orchestrator | Tuesday 03 June 2025 15:44:18 +0000 (0:00:00.704) 0:00:01.863 ********** 2025-06-03 15:45:37.798592 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:45:37.798601 | orchestrator | 2025-06-03 15:45:37.798609 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-03 15:45:37.798617 | orchestrator | Tuesday 03 June 2025 15:44:21 +0000 (0:00:02.763) 0:00:04.627 ********** 2025-06-03 15:45:37.798625 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-03 15:45:37.798633 | orchestrator | 2025-06-03 15:45:37.798641 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-03 15:45:37.798649 | orchestrator | Tuesday 03 June 2025 15:44:24 +0000 (0:00:03.930) 0:00:08.558 ********** 2025-06-03 15:45:37.798658 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-03 15:45:37.798668 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-03 15:45:37.798676 | orchestrator | 2025-06-03 15:45:37.798684 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-03 15:45:37.798692 | orchestrator | Tuesday 03 June 2025 15:44:32 +0000 (0:00:07.436) 0:00:15.994 ********** 2025-06-03 
15:45:37.798700 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:45:37.798708 | orchestrator | 2025-06-03 15:45:37.798716 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-06-03 15:45:37.798724 | orchestrator | Tuesday 03 June 2025 15:44:35 +0000 (0:00:03.575) 0:00:19.570 ********** 2025-06-03 15:45:37.798732 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:45:37.798740 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-03 15:45:37.798749 | orchestrator | 2025-06-03 15:45:37.798764 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-03 15:45:37.798777 | orchestrator | Tuesday 03 June 2025 15:44:40 +0000 (0:00:04.596) 0:00:24.167 ********** 2025-06-03 15:45:37.798798 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:45:37.798813 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-03 15:45:37.798827 | orchestrator | 2025-06-03 15:45:37.798840 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-03 15:45:37.798852 | orchestrator | Tuesday 03 June 2025 15:44:47 +0000 (0:00:06.917) 0:00:31.084 ********** 2025-06-03 15:45:37.798865 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-03 15:45:37.798879 | orchestrator | 2025-06-03 15:45:37.798892 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:45:37.798921 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:45:37.798937 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:45:37.798952 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:45:37.798968 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:45:37.798984 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:45:37.799017 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:45:37.799028 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:45:37.799038 | orchestrator | 2025-06-03 15:45:37.799047 | orchestrator | 2025-06-03 15:45:37.799057 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:45:37.799067 | orchestrator | Tuesday 03 June 2025 15:44:52 +0000 (0:00:05.210) 0:00:36.294 ********** 2025-06-03 15:45:37.799076 | orchestrator | =============================================================================== 2025-06-03 15:45:37.799085 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.44s 2025-06-03 15:45:37.799093 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.92s 2025-06-03 15:45:37.799101 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.21s 2025-06-03 15:45:37.799109 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.60s 2025-06-03 15:45:37.799117 | orchestrator | service-ks-register : ceph-rgw | Creating services 
---------------------- 3.93s 2025-06-03 15:45:37.799125 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.58s 2025-06-03 15:45:37.799133 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.76s 2025-06-03 15:45:37.799141 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s 2025-06-03 15:45:37.799149 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2025-06-03 15:45:37.799157 | orchestrator | 2025-06-03 15:45:37.799165 | orchestrator | 2025-06-03 15:45:37.799173 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-06-03 15:45:37.799181 | orchestrator | 2025-06-03 15:45:37.799189 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-03 15:45:37.799197 | orchestrator | Tuesday 03 June 2025 15:44:09 +0000 (0:00:00.277) 0:00:00.277 ********** 2025-06-03 15:45:37.799205 | orchestrator | changed: [testbed-manager] 2025-06-03 15:45:37.799213 | orchestrator | 2025-06-03 15:45:37.799221 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-03 15:45:37.799229 | orchestrator | Tuesday 03 June 2025 15:44:10 +0000 (0:00:01.476) 0:00:01.753 ********** 2025-06-03 15:45:37.799238 | orchestrator | changed: [testbed-manager] 2025-06-03 15:45:37.799246 | orchestrator | 2025-06-03 15:45:37.799254 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-03 15:45:37.799271 | orchestrator | Tuesday 03 June 2025 15:44:11 +0000 (0:00:01.048) 0:00:02.802 ********** 2025-06-03 15:45:37.799305 | orchestrator | changed: [testbed-manager] 2025-06-03 15:45:37.799314 | orchestrator | 2025-06-03 15:45:37.799322 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-03 15:45:37.799330 | orchestrator | Tuesday 03 June 2025 15:44:12 +0000 (0:00:00.985) 0:00:03.787 ********** 2025-06-03 15:45:37.799338 | orchestrator | changed: [testbed-manager] 2025-06-03 15:45:37.799346 | orchestrator | 2025-06-03 15:45:37.799354 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-03 15:45:37.799362 | orchestrator | Tuesday 03 June 2025 15:44:13 +0000 (0:00:01.169) 0:00:04.956 ********** 2025-06-03 15:45:37.799370 | orchestrator | changed: [testbed-manager] 2025-06-03 15:45:37.799378 | orchestrator | 2025-06-03 15:45:37.799386 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-03 15:45:37.799394 | orchestrator | Tuesday 03 June 2025 15:44:15 +0000 (0:00:01.160) 0:00:06.117 ********** 2025-06-03 15:45:37.799402 | orchestrator | changed: [testbed-manager] 2025-06-03 15:45:37.799411 | orchestrator | 2025-06-03 15:45:37.799419 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-03 15:45:37.799427 | orchestrator | Tuesday 03 June 2025 15:44:16 +0000 (0:00:01.159) 0:00:07.276 ********** 2025-06-03 15:45:37.799435 | orchestrator | changed: [testbed-manager] 2025-06-03 15:45:37.799443 | orchestrator | 2025-06-03 15:45:37.799450 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-03 15:45:37.799458 | orchestrator | Tuesday 03 June 2025 15:44:18 +0000 (0:00:02.309) 0:00:09.586 ********** 2025-06-03 15:45:37.799467 | 
orchestrator | changed: [testbed-manager] 2025-06-03 15:45:37.799475 | orchestrator | 2025-06-03 15:45:37.799483 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-03 15:45:37.799492 | orchestrator | Tuesday 03 June 2025 15:44:19 +0000 (0:00:01.254) 0:00:10.841 ********** 2025-06-03 15:45:37.799500 | orchestrator | changed: [testbed-manager] 2025-06-03 15:45:37.799508 | orchestrator | 2025-06-03 15:45:37.799515 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-03 15:45:37.799523 | orchestrator | Tuesday 03 June 2025 15:45:12 +0000 (0:00:52.972) 0:01:03.813 ********** 2025-06-03 15:45:37.799531 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:45:37.799539 | orchestrator | 2025-06-03 15:45:37.799547 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-03 15:45:37.799555 | orchestrator | 2025-06-03 15:45:37.799567 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-03 15:45:37.799576 | orchestrator | Tuesday 03 June 2025 15:45:12 +0000 (0:00:00.162) 0:01:03.976 ********** 2025-06-03 15:45:37.799584 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:45:37.799592 | orchestrator | 2025-06-03 15:45:37.799600 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-03 15:45:37.799608 | orchestrator | 2025-06-03 15:45:37.799617 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-03 15:45:37.799625 | orchestrator | Tuesday 03 June 2025 15:45:24 +0000 (0:00:11.695) 0:01:15.671 ********** 2025-06-03 15:45:37.799633 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:45:37.799641 | orchestrator | 2025-06-03 15:45:37.799649 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-03 15:45:37.799657 | orchestrator | 2025-06-03 15:45:37.799665 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-03 15:45:37.799673 | orchestrator | Tuesday 03 June 2025 15:45:25 +0000 (0:00:01.083) 0:01:16.755 ********** 2025-06-03 15:45:37.799682 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:45:37.799690 | orchestrator | 2025-06-03 15:45:37.799704 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:45:37.799713 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:45:37.799727 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:45:37.799736 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:45:37.799744 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:45:37.799752 | orchestrator | 2025-06-03 15:45:37.799760 | orchestrator | 2025-06-03 15:45:37.799768 | orchestrator | 2025-06-03 15:45:37.799776 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:45:37.799785 | orchestrator | Tuesday 03 June 2025 15:45:36 +0000 (0:00:11.060) 0:01:27.816 ********** 2025-06-03 15:45:37.799793 | orchestrator | =============================================================================== 2025-06-03 15:45:37.799801 | 
orchestrator | Create admin user ------------------------------------------------------ 52.97s 2025-06-03 15:45:37.799810 | orchestrator | Restart ceph manager service ------------------------------------------- 23.84s 2025-06-03 15:45:37.799818 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.31s 2025-06-03 15:45:37.799826 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.48s 2025-06-03 15:45:37.799834 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.25s 2025-06-03 15:45:37.799842 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.17s 2025-06-03 15:45:37.799850 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.16s 2025-06-03 15:45:37.799858 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.16s 2025-06-03 15:45:37.799866 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.05s 2025-06-03 15:45:37.799874 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.99s 2025-06-03 15:45:37.799882 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s 2025-06-03 15:45:37.799891 | orchestrator | 2025-06-03 15:45:37 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:40.831438 | orchestrator | 2025-06-03 15:45:40 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:40.833023 | orchestrator | 2025-06-03 15:45:40 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:40.833634 | orchestrator | 2025-06-03 15:45:40 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:40.834700 | orchestrator | 2025-06-03 15:45:40 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:40.834753 | orchestrator | 2025-06-03 15:45:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:43.867046 | orchestrator | 2025-06-03 15:45:43 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:43.868442 | orchestrator | 2025-06-03 15:45:43 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:43.869128 | orchestrator | 2025-06-03 15:45:43 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:43.870238 | orchestrator | 2025-06-03 15:45:43 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:43.870425 | orchestrator | 2025-06-03 15:45:43 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:46.911817 | orchestrator | 2025-06-03 15:45:46 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:46.917363 | orchestrator | 2025-06-03 15:45:46 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:46.919918 | orchestrator | 2025-06-03 15:45:46 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:46.923782 | orchestrator | 2025-06-03 15:45:46 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:46.923845 | orchestrator | 2025-06-03 15:45:46 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:49.958935 | orchestrator | 2025-06-03 15:45:49 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 
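The ceph-rgw play shown above registers the Ceph RadosGW as the Swift-compatible object store in Keystone: a swift service, internal and public endpoints, a ceph_rgw user in the service project, and the admin and ResellerAdmin roles. kolla-ansible does this through its service-ks-register role; a rough openstacksdk equivalent is sketched below (service name, endpoint URLs and role names are taken from the log; the cloud entry and password are placeholders):

    import openstack

    conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

    # Service and endpoints for the RadosGW object store.
    svc = conn.identity.create_service(name="swift", type="object-store")
    endpoints = {
        "internal": "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
        "public": "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
    }
    for interface, url in endpoints.items():
        conn.identity.create_endpoint(service_id=svc.id, interface=interface, url=url)

    # Service user plus the roles RadosGW needs for Swift-style access.
    project = conn.identity.find_project("service")
    user = conn.identity.create_user(name="ceph_rgw", password="CHANGE_ME",
                                     default_project_id=project.id)
    conn.identity.create_role(name="ResellerAdmin")  # used for Swift ACL handling
    admin = conn.identity.find_role("admin")
    conn.identity.assign_project_role_to_user(project, user, admin)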
2025-06-03 15:45:49.959548 | orchestrator | 2025-06-03 15:45:49 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:49.960586 | orchestrator | 2025-06-03 15:45:49 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:49.961765 | orchestrator | 2025-06-03 15:45:49 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:49.961855 | orchestrator | 2025-06-03 15:45:49 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:52.993680 | orchestrator | 2025-06-03 15:45:52 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:52.995791 | orchestrator | 2025-06-03 15:45:52 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:52.996986 | orchestrator | 2025-06-03 15:45:52 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:52.998690 | orchestrator | 2025-06-03 15:45:52 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:52.998867 | orchestrator | 2025-06-03 15:45:52 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:56.057898 | orchestrator | 2025-06-03 15:45:56 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:56.059553 | orchestrator | 2025-06-03 15:45:56 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:56.060405 | orchestrator | 2025-06-03 15:45:56 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:56.062310 | orchestrator | 2025-06-03 15:45:56 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:56.062359 | orchestrator | 2025-06-03 15:45:56 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:59.106920 | orchestrator | 2025-06-03 15:45:59 | INFO  | Task e5bd73f7-1557-4222-8503-7435302cc104 is in state STARTED 2025-06-03 15:45:59.115128 | orchestrator | 2025-06-03 15:45:59 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:45:59.116131 | orchestrator | 2025-06-03 15:45:59 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:45:59.118215 | orchestrator | 2025-06-03 15:45:59 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:45:59.119101 | orchestrator | 2025-06-03 15:45:59 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:45:59.120561 | orchestrator | 2025-06-03 15:45:59 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:02.187748 | orchestrator | 2025-06-03 15:46:02 | INFO  | Task e5bd73f7-1557-4222-8503-7435302cc104 is in state STARTED 2025-06-03 15:46:02.191100 | orchestrator | 2025-06-03 15:46:02 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:02.191436 | orchestrator | 2025-06-03 15:46:02 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:02.195086 | orchestrator | 2025-06-03 15:46:02 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:02.195153 | orchestrator | 2025-06-03 15:46:02 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:02.196454 | orchestrator | 2025-06-03 15:46:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:05.228234 | orchestrator | 2025-06-03 15:46:05 | INFO  | Task e5bd73f7-1557-4222-8503-7435302cc104 is in state STARTED 
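The ceph dashboard bootstrap play shown above disables the dashboard mgr module, rewrites its configuration keys, re-enables it and creates an admin user from a temporary password file before restarting the ceph-mgr services. A hedged sketch of the underlying ceph CLI calls driven from Python (the exact commands used by the role and the password file path are assumptions):

    import subprocess

    def ceph(*args):
        # Run a ceph CLI command on the manager and fail on non-zero exit.
        subprocess.run(["ceph", *args], check=True)

    # Disable the module while reconfiguring it, as the play above does.
    ceph("mgr", "module", "disable", "dashboard")
    ceph("config", "set", "mgr", "mgr/dashboard/ssl", "false")
    ceph("config", "set", "mgr", "mgr/dashboard/server_port", "7000")
    ceph("config", "set", "mgr", "mgr/dashboard/server_addr", "0.0.0.0")
    ceph("config", "set", "mgr", "mgr/dashboard/standby_behaviour", "error")
    ceph("config", "set", "mgr", "mgr/dashboard/standby_error_status_code", "404")
    ceph("mgr", "module", "enable", "dashboard")

    # The dashboard password is read from a file (-i) rather than passed on
    # the command line; /tmp/ceph_dashboard_password is a placeholder path.
    ceph("dashboard", "ac-user-create", "admin",
         "-i", "/tmp/ceph_dashboard_password", "administrator")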
2025-06-03 15:46:05.228452 | orchestrator | 2025-06-03 15:46:05 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:05.229379 | orchestrator | 2025-06-03 15:46:05 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:05.238942 | orchestrator | 2025-06-03 15:46:05 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:05.239562 | orchestrator | 2025-06-03 15:46:05 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:05.239648 | orchestrator | 2025-06-03 15:46:05 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:08.264754 | orchestrator | 2025-06-03 15:46:08 | INFO  | Task e5bd73f7-1557-4222-8503-7435302cc104 is in state STARTED 2025-06-03 15:46:08.265396 | orchestrator | 2025-06-03 15:46:08 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:08.265869 | orchestrator | 2025-06-03 15:46:08 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:08.266571 | orchestrator | 2025-06-03 15:46:08 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:08.267603 | orchestrator | 2025-06-03 15:46:08 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:08.267644 | orchestrator | 2025-06-03 15:46:08 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:11.292779 | orchestrator | 2025-06-03 15:46:11 | INFO  | Task e5bd73f7-1557-4222-8503-7435302cc104 is in state STARTED 2025-06-03 15:46:11.293183 | orchestrator | 2025-06-03 15:46:11 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:11.294334 | orchestrator | 2025-06-03 15:46:11 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:11.295015 | orchestrator | 2025-06-03 15:46:11 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:11.295821 | orchestrator | 2025-06-03 15:46:11 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:11.295892 | orchestrator | 2025-06-03 15:46:11 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:14.338931 | orchestrator | 2025-06-03 15:46:14 | INFO  | Task e5bd73f7-1557-4222-8503-7435302cc104 is in state STARTED 2025-06-03 15:46:14.339020 | orchestrator | 2025-06-03 15:46:14 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:14.339345 | orchestrator | 2025-06-03 15:46:14 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:14.341930 | orchestrator | 2025-06-03 15:46:14 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:14.341991 | orchestrator | 2025-06-03 15:46:14 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:14.342000 | orchestrator | 2025-06-03 15:46:14 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:17.393138 | orchestrator | 2025-06-03 15:46:17 | INFO  | Task e5bd73f7-1557-4222-8503-7435302cc104 is in state SUCCESS 2025-06-03 15:46:17.396681 | orchestrator | 2025-06-03 15:46:17 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:17.397931 | orchestrator | 2025-06-03 15:46:17 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:17.402771 | orchestrator | 2025-06-03 15:46:17 | INFO  | Task 
4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:17.403302 | orchestrator | 2025-06-03 15:46:17 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:17.403328 | orchestrator | 2025-06-03 15:46:17 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:20.446391 | orchestrator | 2025-06-03 15:46:20 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:20.446511 | orchestrator | 2025-06-03 15:46:20 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:20.446536 | orchestrator | 2025-06-03 15:46:20 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:20.447909 | orchestrator | 2025-06-03 15:46:20 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:20.447950 | orchestrator | 2025-06-03 15:46:20 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:23.495176 | orchestrator | 2025-06-03 15:46:23 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:23.503362 | orchestrator | 2025-06-03 15:46:23 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:23.508009 | orchestrator | 2025-06-03 15:46:23 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:23.509778 | orchestrator | 2025-06-03 15:46:23 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:23.509817 | orchestrator | 2025-06-03 15:46:23 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:26.556404 | orchestrator | 2025-06-03 15:46:26 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:26.559734 | orchestrator | 2025-06-03 15:46:26 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:26.563865 | orchestrator | 2025-06-03 15:46:26 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:26.566150 | orchestrator | 2025-06-03 15:46:26 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:26.566209 | orchestrator | 2025-06-03 15:46:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:29.612175 | orchestrator | 2025-06-03 15:46:29 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:29.612796 | orchestrator | 2025-06-03 15:46:29 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:29.614199 | orchestrator | 2025-06-03 15:46:29 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:29.615088 | orchestrator | 2025-06-03 15:46:29 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:29.615400 | orchestrator | 2025-06-03 15:46:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:32.668359 | orchestrator | 2025-06-03 15:46:32 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:32.671564 | orchestrator | 2025-06-03 15:46:32 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:32.672473 | orchestrator | 2025-06-03 15:46:32 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:32.673928 | orchestrator | 2025-06-03 15:46:32 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:32.673983 | orchestrator | 2025-06-03 15:46:32 | INFO  | Wait 1 
second(s) until the next check 2025-06-03 15:46:35.713028 | orchestrator | 2025-06-03 15:46:35 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:35.713264 | orchestrator | 2025-06-03 15:46:35 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:35.714734 | orchestrator | 2025-06-03 15:46:35 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:35.716150 | orchestrator | 2025-06-03 15:46:35 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:35.716195 | orchestrator | 2025-06-03 15:46:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:38.763467 | orchestrator | 2025-06-03 15:46:38 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:38.764495 | orchestrator | 2025-06-03 15:46:38 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:38.767601 | orchestrator | 2025-06-03 15:46:38 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:38.768319 | orchestrator | 2025-06-03 15:46:38 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:38.768351 | orchestrator | 2025-06-03 15:46:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:41.815847 | orchestrator | 2025-06-03 15:46:41 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:41.817445 | orchestrator | 2025-06-03 15:46:41 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:41.819166 | orchestrator | 2025-06-03 15:46:41 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:41.820572 | orchestrator | 2025-06-03 15:46:41 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:41.820601 | orchestrator | 2025-06-03 15:46:41 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:44.861819 | orchestrator | 2025-06-03 15:46:44 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:44.862118 | orchestrator | 2025-06-03 15:46:44 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:44.865598 | orchestrator | 2025-06-03 15:46:44 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:44.866491 | orchestrator | 2025-06-03 15:46:44 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:44.866594 | orchestrator | 2025-06-03 15:46:44 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:47.920430 | orchestrator | 2025-06-03 15:46:47 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:47.922440 | orchestrator | 2025-06-03 15:46:47 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:47.924249 | orchestrator | 2025-06-03 15:46:47 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:47.925839 | orchestrator | 2025-06-03 15:46:47 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:47.925877 | orchestrator | 2025-06-03 15:46:47 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:50.982956 | orchestrator | 2025-06-03 15:46:50 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:50.984048 | orchestrator | 2025-06-03 15:46:50 | INFO  | Task 
6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:50.986439 | orchestrator | 2025-06-03 15:46:50 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:50.986829 | orchestrator | 2025-06-03 15:46:50 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:50.986879 | orchestrator | 2025-06-03 15:46:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:54.065656 | orchestrator | 2025-06-03 15:46:54 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:54.065729 | orchestrator | 2025-06-03 15:46:54 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:54.065735 | orchestrator | 2025-06-03 15:46:54 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:54.065739 | orchestrator | 2025-06-03 15:46:54 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:54.065744 | orchestrator | 2025-06-03 15:46:54 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:57.115584 | orchestrator | 2025-06-03 15:46:57 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:46:57.115663 | orchestrator | 2025-06-03 15:46:57 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:46:57.117148 | orchestrator | 2025-06-03 15:46:57 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:46:57.122354 | orchestrator | 2025-06-03 15:46:57 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:46:57.122436 | orchestrator | 2025-06-03 15:46:57 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:00.153887 | orchestrator | 2025-06-03 15:47:00 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:00.157390 | orchestrator | 2025-06-03 15:47:00 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:00.165771 | orchestrator | 2025-06-03 15:47:00 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:47:00.165904 | orchestrator | 2025-06-03 15:47:00 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:47:00.165930 | orchestrator | 2025-06-03 15:47:00 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:03.186212 | orchestrator | 2025-06-03 15:47:03 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:03.186721 | orchestrator | 2025-06-03 15:47:03 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:03.187453 | orchestrator | 2025-06-03 15:47:03 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:47:03.193604 | orchestrator | 2025-06-03 15:47:03 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:47:03.193676 | orchestrator | 2025-06-03 15:47:03 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:06.229944 | orchestrator | 2025-06-03 15:47:06 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:06.231866 | orchestrator | 2025-06-03 15:47:06 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:06.232255 | orchestrator | 2025-06-03 15:47:06 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:47:06.233980 | orchestrator | 2025-06-03 15:47:06 | INFO  | Task 
4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:47:06.234295 | orchestrator | 2025-06-03 15:47:06 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:09.278389 | orchestrator | 2025-06-03 15:47:09 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:09.278498 | orchestrator | 2025-06-03 15:47:09 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:09.281871 | orchestrator | 2025-06-03 15:47:09 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:47:09.281926 | orchestrator | 2025-06-03 15:47:09 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:47:09.281939 | orchestrator | 2025-06-03 15:47:09 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:12.322111 | orchestrator | 2025-06-03 15:47:12 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:12.322524 | orchestrator | 2025-06-03 15:47:12 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:12.323042 | orchestrator | 2025-06-03 15:47:12 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:47:12.324408 | orchestrator | 2025-06-03 15:47:12 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:47:12.324454 | orchestrator | 2025-06-03 15:47:12 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:15.369362 | orchestrator | 2025-06-03 15:47:15 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:15.369428 | orchestrator | 2025-06-03 15:47:15 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:15.370086 | orchestrator | 2025-06-03 15:47:15 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:47:15.370727 | orchestrator | 2025-06-03 15:47:15 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:47:15.370791 | orchestrator | 2025-06-03 15:47:15 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:18.420602 | orchestrator | 2025-06-03 15:47:18 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:18.424907 | orchestrator | 2025-06-03 15:47:18 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:18.428670 | orchestrator | 2025-06-03 15:47:18 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state STARTED 2025-06-03 15:47:18.431132 | orchestrator | 2025-06-03 15:47:18 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:47:18.431528 | orchestrator | 2025-06-03 15:47:18 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:21.481309 | orchestrator | 2025-06-03 15:47:21 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:21.481388 | orchestrator | 2025-06-03 15:47:21 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:21.483058 | orchestrator | 2025-06-03 15:47:21 | INFO  | Task 4c569064-c732-456b-ba83-73abc6c144f6 is in state SUCCESS 2025-06-03 15:47:21.486142 | orchestrator | 2025-06-03 15:47:21.486215 | orchestrator | None 2025-06-03 15:47:21.487421 | orchestrator | 2025-06-03 15:47:21.487455 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:47:21.487461 | orchestrator | 2025-06-03 15:47:21.487465 | orchestrator | 
TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:47:21.487470 | orchestrator | Tuesday 03 June 2025 15:44:09 +0000 (0:00:00.345) 0:00:00.345 ********** 2025-06-03 15:47:21.487474 | orchestrator | ok: [testbed-manager] 2025-06-03 15:47:21.487479 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:47:21.487483 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:47:21.487487 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:47:21.487491 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:47:21.487495 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:47:21.487499 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:47:21.487503 | orchestrator | 2025-06-03 15:47:21.487507 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:47:21.487528 | orchestrator | Tuesday 03 June 2025 15:44:10 +0000 (0:00:00.942) 0:00:01.287 ********** 2025-06-03 15:47:21.487533 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-03 15:47:21.487537 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-03 15:47:21.487541 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-03 15:47:21.487545 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-03 15:47:21.487549 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-03 15:47:21.487552 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-03 15:47:21.487556 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-03 15:47:21.487560 | orchestrator | 2025-06-03 15:47:21.487563 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-03 15:47:21.487567 | orchestrator | 2025-06-03 15:47:21.487571 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-03 15:47:21.487575 | orchestrator | Tuesday 03 June 2025 15:44:10 +0000 (0:00:00.828) 0:00:02.116 ********** 2025-06-03 15:47:21.487579 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:47:21.487585 | orchestrator | 2025-06-03 15:47:21.487589 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-03 15:47:21.487593 | orchestrator | Tuesday 03 June 2025 15:44:12 +0000 (0:00:01.603) 0:00:03.720 ********** 2025-06-03 15:47:21.487615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487633 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:47:21.487642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.487659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.487674 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487700 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.487708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.487728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.487738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-03 15:47:21.487751 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:47:21.487756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.487761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.487809 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.487822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.487826 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.487830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.487837 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.487841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.487845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.487849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.487853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.487864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.487869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.487872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.487877 | orchestrator | 2025-06-03 15:47:21.487880 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-03 15:47:21.487885 | orchestrator | Tuesday 03 June 2025 15:44:16 +0000 (0:00:03.926) 0:00:07.647 ********** 2025-06-03 15:47:21.487891 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:47:21.487896 | orchestrator | 2025-06-03 15:47:21.487900 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-03 15:47:21.487903 | orchestrator | Tuesday 03 June 2025 15:44:17 +0000 (0:00:01.474) 0:00:09.122 ********** 2025-06-03 15:47:21.487908 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:47:21.487912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487988 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.487993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.487997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.488004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.488012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.488016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.488020 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.488028 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.488033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.488040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.488055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.488062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.488073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.488081 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:47:21.488092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.488099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.488110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.488118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.488355 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.488373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.488379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.488392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.488398 | orchestrator | 2025-06-03 15:47:21.488405 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-03 15:47:21.488412 | orchestrator | Tuesday 03 June 2025 15:44:24 +0000 (0:00:06.441) 0:00:15.563 ********** 2025-06-03 15:47:21.488419 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-03 15:47:21.488433 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.488441 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488455 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-03 15:47:21.488462 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488468 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:47:21.488477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.488484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.488526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.488563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
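Editor's note: the backend internal TLS copy tasks above are skipped on every host, which is the expected result when backend TLS is not enabled for this testbed. To follow loops like these across hosts, a small, purely hypothetical helper (not part of the job) can tally per-host results; it only assumes the "changed:/ok:/skipping: [host]" pattern visible in this console output:

    import re
    from collections import Counter, defaultdict

    # Hypothetical log-reading helper; relies only on the result pattern seen above.
    RESULT_RE = re.compile(r"(changed|ok|skipping): \[([\w.-]+)\]")

    def tally_results(console_text):
        """Count Ansible task results per host from a chunk of console output."""
        per_host = defaultdict(Counter)
        for status, host in RESULT_RE.findall(console_text):
            per_host[host][status] += 1
        return per_host

    sample = (
        "2025-06-03 15:47:21 | orchestrator | changed: [testbed-node-0] => (item=...) "
        "2025-06-03 15:47:21 | orchestrator | skipping: [testbed-manager]"
    )
    for host, counts in tally_results(sample).items():
        print(host, dict(counts))
    # testbed-node-0 {'changed': 1}
    # testbed-manager {'skipping': 1}
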
2025-06-03 15:47:21.488584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488591 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:21.488597 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:21.488603 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:21.488610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.488625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488638 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.488644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.488651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488665 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:47:21.488672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.488678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488699 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:47:21.488705 | orchestrator | 2025-06-03 15:47:21.488711 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-03 15:47:21.488718 | orchestrator | Tuesday 03 June 2025 15:44:26 +0000 (0:00:01.780) 0:00:17.344 ********** 2025-06-03 15:47:21.488724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.488731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488760 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-03 15:47:21.488774 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}})  2025-06-03 15:47:21.488781 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488788 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-03 15:47:21.488799 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.488812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488848 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:21.488854 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:47:21.488860 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:21.488867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.488873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:47:21.488908 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:21.488918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.488924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:47:21.488937 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.488943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:47:21.489516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-03 15:47:21.489543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-03 15:47:21.489560 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:47:21.489567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-03 15:47:21.489580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-03 15:47:21.489586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-03 15:47:21.489592 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:47:21.489598 | orchestrator |
2025-06-03 15:47:21.489605 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-06-03 15:47:21.489611 | orchestrator | Tuesday 03 June 2025 15:44:28 +0000 (0:00:01.954) 0:00:19.298 **********
2025-06-03 15:47:21.489618 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:47:21.489623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.489632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.489642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.489647 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.489654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.489658 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.489662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.489694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.489699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.489709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.489714 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.489718 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.489725 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.489729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.489733 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.489738 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:47:21.489748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.489753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.489757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.489763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.489788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.489792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.489796 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.489806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-03 15:47:21.489810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-03 15:47:21.489814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:47:21.489818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:47:21.489824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:47:21.489829 | orchestrator |
2025-06-03 15:47:21.489832 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-06-03 15:47:21.489836 | orchestrator | Tuesday 03 June 2025 15:44:34 +0000 (0:00:06.226) 0:00:25.525 **********
2025-06-03 15:47:21.489840 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-03 15:47:21.489844 | orchestrator |
2025-06-03 15:47:21.489848 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-06-03 15:47:21.489852 | orchestrator | Tuesday 03 June 2025 15:44:35 +0000 (0:00:00.821) 0:00:26.347 **********
2025-06-03 15:47:21.489856 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096211, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0,
'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489863 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096199, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7106526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489871 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096211, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489875 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096211, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489879 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096211, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.489888 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096178, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489892 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096211, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489896 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096199, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7106526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489903 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096211, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489911 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096211, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489915 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096199, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7106526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489919 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096180, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489926 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096199, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7106526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489930 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096199, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7106526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489934 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096178, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489941 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096178, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489948 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096199, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7106526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489952 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096195, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489956 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096180, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489962 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096199, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7106526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.489967 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096178, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489974 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096180, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489978 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096187, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7056525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489985 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096178, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489989 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096195, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.489993 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096178, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490000 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096195, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490004 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096194, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490011 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096180, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490040 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 
'inode': 1096180, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490048 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096180, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490053 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096187, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7056525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490057 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096178, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.490064 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096187, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7056525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490068 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096202, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7116525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490126 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096195, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490132 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096195, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490139 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096195, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490144 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096194, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490148 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096209, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490156 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096187, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7056525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490185 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096202, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7116525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490190 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096187, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7056525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490194 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096194, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490201 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096180, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.490205 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096226, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490209 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096209, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490217 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096187, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7056525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490224 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096194, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490229 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096202, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7116525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490233 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096194, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490237 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096226, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490244 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096204, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7126527, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490249 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096194, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490256 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096195, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.490265 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096209, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490269 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096204, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7126527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490274 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096202, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7116525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490278 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 13522, 'inode': 1096202, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7116525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490285 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096184, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490289 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096202, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7116525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490294 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096209, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490304 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096184, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490308 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096226, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490313 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096209, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490317 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096209, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490324 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096193, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7076526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490329 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096193, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7076526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490338 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096226, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490345 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096226, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
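The loop output above shows the prometheus role fanning out every rules file found under the /operations/prometheus/ overlay: each item is the file dict returned by a find task, the copy reports "changed" only on testbed-manager (the host running prometheus-server in this testbed), and every item is skipped on the worker nodes. A minimal sketch of that find-and-copy pattern, with illustrative paths, destination, and group name rather than the exact kolla-ansible task, could look like:

- name: Find Prometheus alert rule files in the configuration overlay
  ansible.builtin.find:
    paths: /operations/prometheus        # overlay directory seen in the log items
    patterns: "*.rules,*.rec.rules"
  delegate_to: localhost
  register: prometheus_rule_files

- name: Copy rule files to hosts running prometheus-server
  ansible.builtin.copy:
    src: "{{ item.path }}"
    dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"   # illustrative destination
    mode: "0644"
  loop: "{{ prometheus_rule_files.files }}"
  when: inventory_hostname in groups['prometheus']   # assumed group name; only the manager matches here

The per-item dicts in the log (path, mode, uid, mtime, and so on) are exactly what ansible.builtin.find returns in its files list, which is why each skipped or changed line echoes the full file metadata.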
2025-06-03 15:47:21.490350 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096226, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490354 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096204, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7126527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490358 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096187, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7056525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.490365 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096175, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7026525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490370 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096175, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7026525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490378 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096204, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7126527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490382 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096204, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7126527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490387 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096204, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7126527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490391 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096197, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7096527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490395 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096184, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490741 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096224, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490837 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096197, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 
'mtime': 1748870577.0, 'ctime': 1748963084.7096527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490875 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096184, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490906 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096184, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490927 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096191, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7066526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490947 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096193, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7076526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.490966 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096184, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491007 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096214, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7156527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491039 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:21.491062 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096194, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7086525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491083 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096175, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7026525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491110 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096193, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7076526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491130 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096224, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491197 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096193, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7076526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
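The .rules and .rec.rules files being distributed here (node.rules, ceph.rules, rabbitmq.rules, and so on) are consumed by the Prometheus server through its rule_files setting, with each file holding ordinary alerting or recording rule groups. An illustrative stanza, assuming a generic /etc/prometheus/ config path rather than the exact template rendered later in this run:

# prometheus.yml (server configuration, illustrative path):
rule_files:
  - "/etc/prometheus/*.rules"
  - "/etc/prometheus/*.rec.rules"

# node.rules (one alerting rule as an example of the file contents):
groups:
  - name: node
    rules:
      - alert: NodeExporterDown
        expr: up{job="node"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "node_exporter target {{ $labels.instance }} is down"

The tiny 3-byte files in the loop (ceph.rec.rules, alertmanager.rec.rules) are effectively empty placeholders, which is consistent with their size field in the item dicts.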
2025-06-03 15:47:21.491217 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096193, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7076526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491254 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096175, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7026525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491286 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096197, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7096527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491307 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096191, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7066526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491334 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096175, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7026525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491350 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096202, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7116525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491362 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096224, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491373 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096175, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7026525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491393 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096214, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7156527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491413 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:21.491425 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096197, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7096527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491440 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096197, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7096527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491470 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 3539, 'inode': 1096224, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491498 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096191, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7066526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491517 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096224, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491534 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096191, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7066526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491573 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096209, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7146528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491593 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096197, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7096527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491610 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096214, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7156527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491627 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:21.491653 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096214, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7156527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491672 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:47:21.491691 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096191, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7066526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491709 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096214, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7156527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491728 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.491749 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096224, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491791 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096191, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1748963084.7066526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491812 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096226, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491840 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096214, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7156527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:47:21.491859 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:47:21.491875 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096204, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7126527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491888 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096184, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7036526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491899 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096193, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7076526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491919 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096175, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7026525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491938 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096197, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7096527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491950 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096224, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7196527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491972 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096191, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7066526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491985 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096214, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7156527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:47:21.491997 | orchestrator | 2025-06-03 15:47:21.492008 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-06-03 15:47:21.492043 | orchestrator | Tuesday 03 June 2025 15:44:59 +0000 (0:00:24.241) 0:00:50.588 ********** 2025-06-03 15:47:21.492056 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:47:21.492080 | orchestrator | 2025-06-03 15:47:21.492091 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-06-03 15:47:21.492104 | orchestrator | Tuesday 03 June 2025 15:45:00 +0000 (0:00:00.796) 0:00:51.385 ********** 2025-06-03 
15:47:21.492115 | orchestrator | [WARNING]: Skipped 2025-06-03 15:47:21.492129 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492141 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-06-03 15:47:21.492181 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492218 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-06-03 15:47:21.492237 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:47:21.492256 | orchestrator | [WARNING]: Skipped 2025-06-03 15:47:21.492274 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492295 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-06-03 15:47:21.492316 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492335 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-06-03 15:47:21.492349 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:47:21.492361 | orchestrator | [WARNING]: Skipped 2025-06-03 15:47:21.492373 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492384 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-06-03 15:47:21.492395 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492407 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-06-03 15:47:21.492418 | orchestrator | [WARNING]: Skipped 2025-06-03 15:47:21.492431 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492441 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-06-03 15:47:21.492452 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492464 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-06-03 15:47:21.492476 | orchestrator | [WARNING]: Skipped 2025-06-03 15:47:21.492497 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492508 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-06-03 15:47:21.492519 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492530 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-06-03 15:47:21.492541 | orchestrator | [WARNING]: Skipped 2025-06-03 15:47:21.492553 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492564 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-06-03 15:47:21.492576 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492588 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-06-03 15:47:21.492599 | orchestrator | [WARNING]: Skipped 2025-06-03 15:47:21.492611 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492622 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-06-03 15:47:21.492634 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:47:21.492645 | orchestrator | node-5/prometheus.yml.d' is not a 
directory 2025-06-03 15:47:21.492657 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-03 15:47:21.492668 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-03 15:47:21.492683 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-03 15:47:21.492702 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-03 15:47:21.492722 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-03 15:47:21.492741 | orchestrator | 2025-06-03 15:47:21.492759 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-06-03 15:47:21.492776 | orchestrator | Tuesday 03 June 2025 15:45:02 +0000 (0:00:02.784) 0:00:54.169 ********** 2025-06-03 15:47:21.492794 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:47:21.492812 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:21.492830 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:47:21.492850 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:21.492871 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:47:21.492914 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:21.492927 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:47:21.492939 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.492950 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:47:21.492961 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:47:21.492973 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:47:21.492985 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:47:21.492996 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-06-03 15:47:21.493007 | orchestrator | 2025-06-03 15:47:21.493018 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-06-03 15:47:21.493029 | orchestrator | Tuesday 03 June 2025 15:45:21 +0000 (0:00:18.514) 0:01:12.684 ********** 2025-06-03 15:47:21.493041 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 15:47:21.493052 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:21.493063 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 15:47:21.493075 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.493085 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 15:47:21.493097 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 15:47:21.493107 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:21.493118 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:21.493129 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 15:47:21.493140 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:47:21.493150 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 
15:47:21.493184 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:47:21.493197 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-06-03 15:47:21.493208 | orchestrator | 2025-06-03 15:47:21.493219 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-06-03 15:47:21.493230 | orchestrator | Tuesday 03 June 2025 15:45:24 +0000 (0:00:03.439) 0:01:16.123 ********** 2025-06-03 15:47:21.493243 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:47:21.493256 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:21.493267 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-03 15:47:21.493278 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:47:21.493290 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:21.493312 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:47:21.493324 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.493336 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:47:21.493347 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:21.493358 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:47:21.493370 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:47:21.493391 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:47:21.493402 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:47:21.493413 | orchestrator | 2025-06-03 15:47:21.493424 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-03 15:47:21.493435 | orchestrator | Tuesday 03 June 2025 15:45:27 +0000 (0:00:02.256) 0:01:18.380 ********** 2025-06-03 15:47:21.493446 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:47:21.493457 | orchestrator | 2025-06-03 15:47:21.493468 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-03 15:47:21.493479 | orchestrator | Tuesday 03 June 2025 15:45:27 +0000 (0:00:00.550) 0:01:18.931 ********** 2025-06-03 15:47:21.493490 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:47:21.493501 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:21.493511 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:21.493522 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:21.493533 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.493544 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:47:21.493555 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:47:21.493566 | orchestrator | 2025-06-03 15:47:21.493577 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-03 15:47:21.493588 | orchestrator | Tuesday 03 June 2025 15:45:28 +0000 (0:00:00.674) 
0:01:19.606 ********** 2025-06-03 15:47:21.493601 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:47:21.493612 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.493623 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:47:21.493634 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:47:21.493645 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:21.493664 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:47:21.493675 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:47:21.493686 | orchestrator | 2025-06-03 15:47:21.493698 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-03 15:47:21.493710 | orchestrator | Tuesday 03 June 2025 15:45:30 +0000 (0:00:02.294) 0:01:21.901 ********** 2025-06-03 15:47:21.493721 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:47:21.493732 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:47:21.493743 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:47:21.493754 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:47:21.493765 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:21.493776 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:21.493787 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:47:21.493799 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:47:21.493810 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:47:21.493821 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:47:21.493832 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:21.493843 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.493854 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:47:21.493865 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:47:21.493876 | orchestrator | 2025-06-03 15:47:21.493887 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-03 15:47:21.493899 | orchestrator | Tuesday 03 June 2025 15:45:32 +0000 (0:00:02.193) 0:01:24.094 ********** 2025-06-03 15:47:21.493910 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:47:21.493921 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:21.493942 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:47:21.493954 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:21.493965 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:47:21.493976 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:21.493987 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:47:21.493999 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.494010 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:47:21.494097 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:47:21.494111 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:47:21.494122 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:47:21.494142 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-03 15:47:21.494154 | orchestrator | 2025-06-03 15:47:21.494198 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-03 15:47:21.494210 | orchestrator | Tuesday 03 June 2025 15:45:34 +0000 (0:00:01.967) 0:01:26.062 ********** 2025-06-03 15:47:21.494221 | orchestrator | [WARNING]: Skipped 2025-06-03 15:47:21.494233 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-03 15:47:21.494244 | orchestrator | due to this access issue: 2025-06-03 15:47:21.494256 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-03 15:47:21.494266 | orchestrator | not a directory 2025-06-03 15:47:21.494277 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:47:21.494289 | orchestrator | 2025-06-03 15:47:21.494299 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-03 15:47:21.494311 | orchestrator | Tuesday 03 June 2025 15:45:36 +0000 (0:00:01.302) 0:01:27.364 ********** 2025-06-03 15:47:21.494323 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:47:21.494334 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:21.494345 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:21.494356 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:21.494366 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.494378 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:47:21.494390 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:47:21.494401 | orchestrator | 2025-06-03 15:47:21.494412 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-03 15:47:21.494424 | orchestrator | Tuesday 03 June 2025 15:45:37 +0000 (0:00:00.917) 0:01:28.282 ********** 2025-06-03 15:47:21.494435 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:47:21.494446 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:21.494457 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:21.494468 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:21.494478 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:47:21.494489 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:47:21.494500 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:47:21.494511 | orchestrator | 2025-06-03 15:47:21.494522 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-03 15:47:21.494533 | orchestrator | Tuesday 03 June 2025 15:45:38 +0000 (0:00:01.408) 0:01:29.690 ********** 2025-06-03 15:47:21.494553 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:47:21.494578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.494591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.494603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.494624 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.494637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.494648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.494665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:47:21.494689 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.494704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.494717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.494730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.494748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.494761 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.494781 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:47:21.494803 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.494816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.494828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.494847 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.494860 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.494872 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.494884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.494908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.494921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.494933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.494945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:47:21.494964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.494977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.494989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:47:21.495013 | orchestrator | 2025-06-03 15:47:21.495035 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-03 15:47:21.495056 | orchestrator | Tuesday 03 June 2025 15:45:43 +0000 (0:00:04.794) 0:01:34.485 ********** 2025-06-03 15:47:21.495084 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-03 15:47:21.495106 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:47:21.495126 | orchestrator | 2025-06-03 15:47:21.495146 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:47:21.495198 | orchestrator | Tuesday 03 June 2025 15:45:44 +0000 (0:00:01.430) 0:01:35.916 ********** 2025-06-03 15:47:21.495218 | orchestrator | 2025-06-03 15:47:21.495236 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 
15:47:21.495254 | orchestrator | Tuesday 03 June 2025 15:45:45 +0000 (0:00:00.814) 0:01:36.730 ********** 2025-06-03 15:47:21.495273 | orchestrator | 2025-06-03 15:47:21.495293 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:47:21.495312 | orchestrator | Tuesday 03 June 2025 15:45:45 +0000 (0:00:00.213) 0:01:36.944 ********** 2025-06-03 15:47:21.495331 | orchestrator | 2025-06-03 15:47:21.495349 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:47:21.495368 | orchestrator | Tuesday 03 June 2025 15:45:45 +0000 (0:00:00.185) 0:01:37.130 ********** 2025-06-03 15:47:21.495386 | orchestrator | 2025-06-03 15:47:21.495406 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:47:21.495426 | orchestrator | Tuesday 03 June 2025 15:45:46 +0000 (0:00:00.139) 0:01:37.269 ********** 2025-06-03 15:47:21.495444 | orchestrator | 2025-06-03 15:47:21.495462 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:47:21.495480 | orchestrator | Tuesday 03 June 2025 15:45:46 +0000 (0:00:00.145) 0:01:37.415 ********** 2025-06-03 15:47:21.495499 | orchestrator | 2025-06-03 15:47:21.495518 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:47:21.495537 | orchestrator | Tuesday 03 June 2025 15:45:46 +0000 (0:00:00.105) 0:01:37.520 ********** 2025-06-03 15:47:21.495556 | orchestrator | 2025-06-03 15:47:21.495575 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-03 15:47:21.495594 | orchestrator | Tuesday 03 June 2025 15:45:46 +0000 (0:00:00.152) 0:01:37.673 ********** 2025-06-03 15:47:21.495612 | orchestrator | changed: [testbed-manager] 2025-06-03 15:47:21.495631 | orchestrator | 2025-06-03 15:47:21.495649 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-03 15:47:21.495668 | orchestrator | Tuesday 03 June 2025 15:46:02 +0000 (0:00:16.288) 0:01:53.961 ********** 2025-06-03 15:47:21.495688 | orchestrator | changed: [testbed-manager] 2025-06-03 15:47:21.495706 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:47:21.495726 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:47:21.495744 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:47:21.495763 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:47:21.495775 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:21.495786 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:47:21.495796 | orchestrator | 2025-06-03 15:47:21.495807 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-03 15:47:21.495819 | orchestrator | Tuesday 03 June 2025 15:46:19 +0000 (0:00:16.248) 0:02:10.209 ********** 2025-06-03 15:47:21.495830 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:47:21.495840 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:21.495851 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:47:21.495862 | orchestrator | 2025-06-03 15:47:21.495872 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-03 15:47:21.495883 | orchestrator | Tuesday 03 June 2025 15:46:29 +0000 (0:00:10.618) 0:02:20.828 ********** 2025-06-03 15:47:21.495894 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:47:21.495917 | 
orchestrator | changed: [testbed-node-2] 2025-06-03 15:47:21.495928 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:21.495939 | orchestrator | 2025-06-03 15:47:21.495949 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-03 15:47:21.495960 | orchestrator | Tuesday 03 June 2025 15:46:35 +0000 (0:00:06.142) 0:02:26.971 ********** 2025-06-03 15:47:21.495971 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:47:21.495995 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:21.496007 | orchestrator | changed: [testbed-manager] 2025-06-03 15:47:21.496018 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:47:21.496029 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:47:21.496039 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:47:21.496050 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:47:21.496061 | orchestrator | 2025-06-03 15:47:21.496071 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-03 15:47:21.496085 | orchestrator | Tuesday 03 June 2025 15:46:49 +0000 (0:00:13.915) 0:02:40.887 ********** 2025-06-03 15:47:21.496104 | orchestrator | changed: [testbed-manager] 2025-06-03 15:47:21.496121 | orchestrator | 2025-06-03 15:47:21.496138 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-03 15:47:21.496155 | orchestrator | Tuesday 03 June 2025 15:46:57 +0000 (0:00:07.359) 0:02:48.247 ********** 2025-06-03 15:47:21.496250 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:47:21.496270 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:21.496289 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:47:21.496307 | orchestrator | 2025-06-03 15:47:21.496325 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-03 15:47:21.496343 | orchestrator | Tuesday 03 June 2025 15:47:03 +0000 (0:00:06.434) 0:02:54.682 ********** 2025-06-03 15:47:21.496358 | orchestrator | changed: [testbed-manager] 2025-06-03 15:47:21.496375 | orchestrator | 2025-06-03 15:47:21.496394 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-03 15:47:21.496413 | orchestrator | Tuesday 03 June 2025 15:47:09 +0000 (0:00:05.597) 0:03:00.279 ********** 2025-06-03 15:47:21.496432 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:47:21.496450 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:47:21.496465 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:47:21.496476 | orchestrator | 2025-06-03 15:47:21.496487 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:47:21.496499 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-03 15:47:21.496511 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-03 15:47:21.496533 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-03 15:47:21.496545 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-03 15:47:21.496556 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-03 15:47:21.496567 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 
skipped=12  rescued=0 ignored=0 2025-06-03 15:47:21.496577 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-03 15:47:21.496588 | orchestrator | 2025-06-03 15:47:21.496599 | orchestrator | 2025-06-03 15:47:21.496610 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:47:21.496621 | orchestrator | Tuesday 03 June 2025 15:47:20 +0000 (0:00:11.179) 0:03:11.459 ********** 2025-06-03 15:47:21.496642 | orchestrator | =============================================================================== 2025-06-03 15:47:21.496653 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.24s 2025-06-03 15:47:21.496664 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.51s 2025-06-03 15:47:21.496682 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.29s 2025-06-03 15:47:21.496709 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.25s 2025-06-03 15:47:21.496729 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.92s 2025-06-03 15:47:21.496746 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.18s 2025-06-03 15:47:21.496762 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.62s 2025-06-03 15:47:21.496778 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.36s 2025-06-03 15:47:21.496794 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.44s 2025-06-03 15:47:21.496810 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.43s 2025-06-03 15:47:21.496827 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.23s 2025-06-03 15:47:21.496838 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.14s 2025-06-03 15:47:21.496848 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.60s 2025-06-03 15:47:21.496857 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.79s 2025-06-03 15:47:21.496867 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.93s 2025-06-03 15:47:21.496877 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.44s 2025-06-03 15:47:21.496886 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.78s 2025-06-03 15:47:21.496896 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.29s 2025-06-03 15:47:21.496915 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.26s 2025-06-03 15:47:21.496925 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.19s 2025-06-03 15:47:21.496935 | orchestrator | 2025-06-03 15:47:21 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:47:21.496945 | orchestrator | 2025-06-03 15:47:21 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:24.522431 | orchestrator | 2025-06-03 15:47:24 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:24.523734 | orchestrator | 2025-06-03 15:47:24 | INFO 
 | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:24.525875 | orchestrator | 2025-06-03 15:47:24 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:47:24.526100 | orchestrator | 2025-06-03 15:47:24 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:24.526116 | orchestrator | 2025-06-03 15:47:24 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:27.568413 | orchestrator | 2025-06-03 15:47:27 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:27.568489 | orchestrator | 2025-06-03 15:47:27 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:27.570887 | orchestrator | 2025-06-03 15:47:27 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state STARTED 2025-06-03 15:47:27.570908 | orchestrator | 2025-06-03 15:47:27 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:27.570915 | orchestrator | 2025-06-03 15:47:27 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:30.617040 | orchestrator | 2025-06-03 15:47:30 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:47:30.617213 | orchestrator | 2025-06-03 15:47:30 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:30.618423 | orchestrator | 2025-06-03 15:47:30 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:30.620373 | orchestrator | 2025-06-03 15:47:30 | INFO  | Task 4193cc9d-62b0-4afe-b2c0-6038b16f6835 is in state SUCCESS 2025-06-03 15:47:30.621916 | orchestrator | 2025-06-03 15:47:30.621947 | orchestrator | 2025-06-03 15:47:30.621953 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:47:30.621957 | orchestrator | 2025-06-03 15:47:30.621961 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:47:30.621966 | orchestrator | Tuesday 03 June 2025 15:44:16 +0000 (0:00:00.311) 0:00:00.311 ********** 2025-06-03 15:47:30.621970 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:47:30.621974 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:47:30.621978 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:47:30.621982 | orchestrator | 2025-06-03 15:47:30.621986 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:47:30.621990 | orchestrator | Tuesday 03 June 2025 15:44:17 +0000 (0:00:00.348) 0:00:00.659 ********** 2025-06-03 15:47:30.621994 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-03 15:47:30.621998 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-03 15:47:30.622002 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-03 15:47:30.622006 | orchestrator | 2025-06-03 15:47:30.622009 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-06-03 15:47:30.622063 | orchestrator | 2025-06-03 15:47:30.622067 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-03 15:47:30.622071 | orchestrator | Tuesday 03 June 2025 15:44:17 +0000 (0:00:00.431) 0:00:01.091 ********** 2025-06-03 15:47:30.622074 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:47:30.622079 | orchestrator | 
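[Editor's note] The glance play that follows registers Glance in Keystone via the service-ks-register role: it creates the image service, the internal and public endpoints, the service project, the glance user, and grants that user the admin role (the role itself already existed, hence "ok" rather than "changed"). Below is a minimal, hypothetical sketch of the equivalent steps expressed as openstack CLI calls; it is not the kolla-ansible implementation. Endpoint URLs and names are taken from the log output above/below; the password value and the assumption that admin credentials are already in the environment are placeholders.

#!/usr/bin/env python3
# Sketch only: mirrors the service-ks-register tasks logged below, not kolla-ansible code.
import subprocess

GLANCE_PASSWORD = "CHANGE_ME"  # placeholder; the real value comes from the deployment's secrets

COMMANDS = [
    # "glance | Creating services": service name glance, type image
    ["openstack", "service", "create", "--name", "glance", "image"],
    # "glance | Creating endpoints": internal and public, as in the log
    ["openstack", "endpoint", "create", "glance", "internal",
     "https://api-int.testbed.osism.xyz:9292"],
    ["openstack", "endpoint", "create", "glance", "public",
     "https://api.testbed.osism.xyz:9292"],
    # "glance | Creating projects"
    ["openstack", "project", "create", "service"],
    # "glance | Creating users": user glance in project service
    ["openstack", "user", "create", "--project", "service",
     "--password", GLANCE_PASSWORD, "glance"],
    # "glance | Granting user roles": glance -> service -> admin
    ["openstack", "role", "add", "--project", "service", "--user", "glance", "admin"],
]

for cmd in COMMANDS:
    # Assumes admin credentials are already exported (OS_CLOUD or OS_AUTH_URL/OS_USERNAME/...).
    subprocess.run(cmd, check=True)

In the actual run these steps are executed once, delegated to testbed-node-0, which is why only that host reports "changed" for the Keystone tasks in the log below.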
2025-06-03 15:47:30.622083 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-03 15:47:30.622087 | orchestrator | Tuesday 03 June 2025 15:44:18 +0000 (0:00:00.588) 0:00:01.680 ********** 2025-06-03 15:47:30.622092 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-03 15:47:30.622096 | orchestrator | 2025-06-03 15:47:30.622100 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-03 15:47:30.622103 | orchestrator | Tuesday 03 June 2025 15:44:22 +0000 (0:00:04.627) 0:00:06.308 ********** 2025-06-03 15:47:30.622107 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-03 15:47:30.622112 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-03 15:47:30.622116 | orchestrator | 2025-06-03 15:47:30.622119 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-03 15:47:30.622123 | orchestrator | Tuesday 03 June 2025 15:44:30 +0000 (0:00:07.882) 0:00:14.190 ********** 2025-06-03 15:47:30.622128 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-06-03 15:47:30.622135 | orchestrator | 2025-06-03 15:47:30.622141 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-03 15:47:30.622147 | orchestrator | Tuesday 03 June 2025 15:44:34 +0000 (0:00:03.658) 0:00:17.848 ********** 2025-06-03 15:47:30.622167 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:47:30.622173 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-03 15:47:30.622179 | orchestrator | 2025-06-03 15:47:30.622184 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-03 15:47:30.622190 | orchestrator | Tuesday 03 June 2025 15:44:38 +0000 (0:00:04.153) 0:00:22.002 ********** 2025-06-03 15:47:30.622219 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:47:30.622225 | orchestrator | 2025-06-03 15:47:30.622231 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-03 15:47:30.622236 | orchestrator | Tuesday 03 June 2025 15:44:42 +0000 (0:00:04.039) 0:00:26.042 ********** 2025-06-03 15:47:30.622242 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-03 15:47:30.622248 | orchestrator | 2025-06-03 15:47:30.622254 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-03 15:47:30.622259 | orchestrator | Tuesday 03 June 2025 15:44:47 +0000 (0:00:05.200) 0:00:31.242 ********** 2025-06-03 15:47:30.622295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:47:30.622307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:47:30.622319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:47:30.622323 | orchestrator | 2025-06-03 15:47:30.622327 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-03 15:47:30.622331 | orchestrator | Tuesday 03 June 2025 15:44:51 +0000 (0:00:03.783) 0:00:35.026 ********** 2025-06-03 15:47:30.622338 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:47:30.622342 | orchestrator | 2025-06-03 15:47:30.622346 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-03 15:47:30.622349 | orchestrator | Tuesday 03 June 2025 15:44:52 +0000 (0:00:00.533) 0:00:35.559 ********** 2025-06-03 15:47:30.622353 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:30.622357 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:47:30.622361 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:47:30.622364 | orchestrator | 2025-06-03 15:47:30.622368 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-03 15:47:30.622372 | orchestrator | Tuesday 03 June 2025 15:44:56 +0000 (0:00:04.214) 0:00:39.773 ********** 2025-06-03 15:47:30.622376 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:47:30.622380 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:47:30.622383 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:47:30.622387 | orchestrator | 2025-06-03 15:47:30.622391 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-03 15:47:30.622394 | orchestrator | Tuesday 03 June 2025 15:44:57 +0000 (0:00:01.647) 0:00:41.421 ********** 2025-06-03 15:47:30.622398 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:47:30.622426 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:47:30.622430 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:47:30.622434 | orchestrator | 2025-06-03 15:47:30.622441 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-03 15:47:30.622445 | 
orchestrator | Tuesday 03 June 2025 15:44:59 +0000 (0:00:01.232) 0:00:42.654 ********** 2025-06-03 15:47:30.622449 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:47:30.622453 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:47:30.622456 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:47:30.622460 | orchestrator | 2025-06-03 15:47:30.622465 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-03 15:47:30.622471 | orchestrator | Tuesday 03 June 2025 15:45:00 +0000 (0:00:00.969) 0:00:43.623 ********** 2025-06-03 15:47:30.622477 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.622483 | orchestrator | 2025-06-03 15:47:30.622489 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-03 15:47:30.622495 | orchestrator | Tuesday 03 June 2025 15:45:00 +0000 (0:00:00.148) 0:00:43.772 ********** 2025-06-03 15:47:30.622501 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.622506 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:30.622511 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:30.622517 | orchestrator | 2025-06-03 15:47:30.622522 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-03 15:47:30.622527 | orchestrator | Tuesday 03 June 2025 15:45:00 +0000 (0:00:00.460) 0:00:44.233 ********** 2025-06-03 15:47:30.622537 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:47:30.622545 | orchestrator | 2025-06-03 15:47:30.622551 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-03 15:47:30.622556 | orchestrator | Tuesday 03 June 2025 15:45:01 +0000 (0:00:00.784) 0:00:45.017 ********** 2025-06-03 15:47:30.622573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:47:30.622582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:47:30.622594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5', '']}}}}) 2025-06-03 15:47:30.622601 | orchestrator | 2025-06-03 15:47:30.622607 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-03 15:47:30.622613 | orchestrator | Tuesday 03 June 2025 15:45:07 +0000 (0:00:05.926) 0:00:50.943 ********** 2025-06-03 15:47:30.622628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:47:30.622640 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:30.622647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:47:30.622653 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.622669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:47:30.622681 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:30.622688 | orchestrator | 2025-06-03 15:47:30.622694 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-03 15:47:30.622701 | orchestrator | Tuesday 03 June 2025 15:45:10 +0000 (0:00:02.953) 0:00:53.897 ********** 2025-06-03 15:47:30.622708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:47:30.622715 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:30.622734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:47:30.622746 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.622753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:47:30.622760 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:30.622767 | orchestrator | 2025-06-03 15:47:30.622773 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-03 15:47:30.622780 | orchestrator | Tuesday 03 June 2025 15:45:13 +0000 (0:00:03.388) 0:00:57.285 ********** 2025-06-03 15:47:30.622784 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:30.622789 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.622793 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:30.622797 | orchestrator | 2025-06-03 15:47:30.622801 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-03 15:47:30.622806 | orchestrator | Tuesday 03 June 2025 15:45:17 +0000 (0:00:03.989) 0:01:01.275 ********** 2025-06-03 15:47:30.622817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:47:30.622827 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:47:30.622832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:47:30.622837 | orchestrator | 2025-06-03 15:47:30.622841 | orchestrator | TASK [glance : Copying over 
glance-api.conf] *********************************** 2025-06-03 15:47:30.622851 | orchestrator | Tuesday 03 June 2025 15:45:22 +0000 (0:00:04.945) 0:01:06.220 ********** 2025-06-03 15:47:30.622855 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:47:30.622860 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:47:30.622864 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:30.622868 | orchestrator | 2025-06-03 15:47:30.622872 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-03 15:47:30.622879 | orchestrator | Tuesday 03 June 2025 15:45:29 +0000 (0:00:07.203) 0:01:13.424 ********** 2025-06-03 15:47:30.622884 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:30.622888 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:30.622891 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.622895 | orchestrator | 2025-06-03 15:47:30.622899 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-03 15:47:30.622903 | orchestrator | Tuesday 03 June 2025 15:45:34 +0000 (0:00:04.783) 0:01:18.207 ********** 2025-06-03 15:47:30.622906 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.622910 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:30.622914 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:30.622917 | orchestrator | 2025-06-03 15:47:30.622921 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-03 15:47:30.622925 | orchestrator | Tuesday 03 June 2025 15:45:39 +0000 (0:00:04.706) 0:01:22.914 ********** 2025-06-03 15:47:30.622929 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.622932 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:30.622936 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:30.622940 | orchestrator | 2025-06-03 15:47:30.622944 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-03 15:47:30.622947 | orchestrator | Tuesday 03 June 2025 15:45:44 +0000 (0:00:04.622) 0:01:27.536 ********** 2025-06-03 15:47:30.622951 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.622955 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:30.622958 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:30.622962 | orchestrator | 2025-06-03 15:47:30.622966 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-03 15:47:30.622969 | orchestrator | Tuesday 03 June 2025 15:45:49 +0000 (0:00:05.150) 0:01:32.687 ********** 2025-06-03 15:47:30.622973 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.622977 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:30.622981 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:30.622984 | orchestrator | 2025-06-03 15:47:30.622988 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-03 15:47:30.622992 | orchestrator | Tuesday 03 June 2025 15:45:49 +0000 (0:00:00.492) 0:01:33.179 ********** 2025-06-03 15:47:30.622995 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-03 15:47:30.623000 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.623003 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-03 15:47:30.623007 | orchestrator | 
skipping: [testbed-node-1] 2025-06-03 15:47:30.623011 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-03 15:47:30.623015 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:30.623018 | orchestrator | 2025-06-03 15:47:30.623022 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-03 15:47:30.623026 | orchestrator | Tuesday 03 June 2025 15:45:56 +0000 (0:00:07.281) 0:01:40.461 ********** 2025-06-03 15:47:30.623030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:47:30.623043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:47:30.623048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:47:30.623056 | orchestrator | 2025-06-03 15:47:30.623060 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-03 15:47:30.623064 | orchestrator | Tuesday 03 June 2025 15:46:08 +0000 (0:00:11.252) 0:01:51.714 ********** 2025-06-03 15:47:30.623068 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:30.623071 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:30.623075 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:30.623079 | orchestrator | 2025-06-03 15:47:30.623082 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-03 15:47:30.623086 | orchestrator | Tuesday 03 June 2025 15:46:09 +0000 (0:00:00.927) 0:01:52.641 ********** 2025-06-03 15:47:30.623090 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:30.623094 | orchestrator | 2025-06-03 15:47:30.623097 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-03 15:47:30.623101 | orchestrator | Tuesday 03 June 2025 15:46:11 +0000 (0:00:02.497) 0:01:55.139 ********** 2025-06-03 15:47:30.623105 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:30.623108 | orchestrator | 2025-06-03 15:47:30.623114 | orchestrator | TASK [glance : 
Enable log_bin_trust_function_creators function] **************** 2025-06-03 15:47:30.623118 | orchestrator | Tuesday 03 June 2025 15:46:14 +0000 (0:00:02.875) 0:01:58.015 ********** 2025-06-03 15:47:30.623122 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:30.623126 | orchestrator | 2025-06-03 15:47:30.623130 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-03 15:47:30.623136 | orchestrator | Tuesday 03 June 2025 15:46:17 +0000 (0:00:02.591) 0:02:00.606 ********** 2025-06-03 15:47:30.623140 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:30.623143 | orchestrator | 2025-06-03 15:47:30.623147 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-03 15:47:30.623205 | orchestrator | Tuesday 03 June 2025 15:46:48 +0000 (0:00:31.336) 0:02:31.942 ********** 2025-06-03 15:47:30.623209 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:30.623213 | orchestrator | 2025-06-03 15:47:30.623217 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-03 15:47:30.623221 | orchestrator | Tuesday 03 June 2025 15:46:50 +0000 (0:00:02.549) 0:02:34.492 ********** 2025-06-03 15:47:30.623224 | orchestrator | 2025-06-03 15:47:30.623228 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-03 15:47:30.623232 | orchestrator | Tuesday 03 June 2025 15:46:51 +0000 (0:00:00.089) 0:02:34.581 ********** 2025-06-03 15:47:30.623235 | orchestrator | 2025-06-03 15:47:30.623239 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-03 15:47:30.623244 | orchestrator | Tuesday 03 June 2025 15:46:51 +0000 (0:00:00.065) 0:02:34.646 ********** 2025-06-03 15:47:30.623250 | orchestrator | 2025-06-03 15:47:30.623257 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-03 15:47:30.623262 | orchestrator | Tuesday 03 June 2025 15:46:51 +0000 (0:00:00.070) 0:02:34.717 ********** 2025-06-03 15:47:30.623269 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:30.623275 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:47:30.623281 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:47:30.623286 | orchestrator | 2025-06-03 15:47:30.623292 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:47:30.623305 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-03 15:47:30.623313 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-03 15:47:30.623319 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-03 15:47:30.623326 | orchestrator | 2025-06-03 15:47:30.623333 | orchestrator | 2025-06-03 15:47:30.623338 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:47:30.623342 | orchestrator | Tuesday 03 June 2025 15:47:28 +0000 (0:00:36.829) 0:03:11.546 ********** 2025-06-03 15:47:30.623346 | orchestrator | =============================================================================== 2025-06-03 15:47:30.623349 | orchestrator | glance : Restart glance-api container ---------------------------------- 36.83s 2025-06-03 15:47:30.623353 | orchestrator | glance : Running Glance bootstrap 
container ---------------------------- 31.34s 2025-06-03 15:47:30.623357 | orchestrator | glance : Check glance containers --------------------------------------- 11.25s 2025-06-03 15:47:30.623360 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.88s 2025-06-03 15:47:30.623364 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 7.28s 2025-06-03 15:47:30.623368 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.20s 2025-06-03 15:47:30.623371 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.93s 2025-06-03 15:47:30.623375 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 5.20s 2025-06-03 15:47:30.623378 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.15s 2025-06-03 15:47:30.623382 | orchestrator | glance : Copying over config.json files for services -------------------- 4.95s 2025-06-03 15:47:30.623386 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.78s 2025-06-03 15:47:30.623389 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.71s 2025-06-03 15:47:30.623393 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.63s 2025-06-03 15:47:30.623397 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.62s 2025-06-03 15:47:30.623400 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.21s 2025-06-03 15:47:30.623404 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.15s 2025-06-03 15:47:30.623408 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 4.04s 2025-06-03 15:47:30.623411 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.99s 2025-06-03 15:47:30.623415 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.78s 2025-06-03 15:47:30.623419 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.66s 2025-06-03 15:47:30.623423 | orchestrator | 2025-06-03 15:47:30 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:30.623426 | orchestrator | 2025-06-03 15:47:30 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:33.673850 | orchestrator | 2025-06-03 15:47:33 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:47:33.676326 | orchestrator | 2025-06-03 15:47:33 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:33.678632 | orchestrator | 2025-06-03 15:47:33 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:33.680512 | orchestrator | 2025-06-03 15:47:33 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:33.680682 | orchestrator | 2025-06-03 15:47:33 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:36.718588 | orchestrator | 2025-06-03 15:47:36 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:47:36.720433 | orchestrator | 2025-06-03 15:47:36 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:36.722556 | orchestrator | 2025-06-03 15:47:36 | INFO  | Task 
6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:36.724335 | orchestrator | 2025-06-03 15:47:36 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:36.724366 | orchestrator | 2025-06-03 15:47:36 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:39.770848 | orchestrator | 2025-06-03 15:47:39 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:47:39.772891 | orchestrator | 2025-06-03 15:47:39 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:39.774313 | orchestrator | 2025-06-03 15:47:39 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:39.776709 | orchestrator | 2025-06-03 15:47:39 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:39.776774 | orchestrator | 2025-06-03 15:47:39 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:42.816122 | orchestrator | 2025-06-03 15:47:42 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:47:42.818731 | orchestrator | 2025-06-03 15:47:42 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:42.821905 | orchestrator | 2025-06-03 15:47:42 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:42.825292 | orchestrator | 2025-06-03 15:47:42 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:42.825362 | orchestrator | 2025-06-03 15:47:42 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:45.864350 | orchestrator | 2025-06-03 15:47:45 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:47:45.866418 | orchestrator | 2025-06-03 15:47:45 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:45.868543 | orchestrator | 2025-06-03 15:47:45 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:45.870627 | orchestrator | 2025-06-03 15:47:45 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:45.870671 | orchestrator | 2025-06-03 15:47:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:48.923399 | orchestrator | 2025-06-03 15:47:48 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:47:48.926606 | orchestrator | 2025-06-03 15:47:48 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:48.926761 | orchestrator | 2025-06-03 15:47:48 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:48.928730 | orchestrator | 2025-06-03 15:47:48 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:48.928777 | orchestrator | 2025-06-03 15:47:48 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:51.970584 | orchestrator | 2025-06-03 15:47:51 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:47:51.974447 | orchestrator | 2025-06-03 15:47:51 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:51.976910 | orchestrator | 2025-06-03 15:47:51 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:51.978938 | orchestrator | 2025-06-03 15:47:51 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:51.978994 | orchestrator | 2025-06-03 15:47:51 | INFO  | Wait 1 
second(s) until the next check 2025-06-03 15:47:55.025757 | orchestrator | 2025-06-03 15:47:55 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:47:55.026838 | orchestrator | 2025-06-03 15:47:55 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:55.027815 | orchestrator | 2025-06-03 15:47:55 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:55.029043 | orchestrator | 2025-06-03 15:47:55 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:55.029092 | orchestrator | 2025-06-03 15:47:55 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:58.065735 | orchestrator | 2025-06-03 15:47:58 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:47:58.067526 | orchestrator | 2025-06-03 15:47:58 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:47:58.069395 | orchestrator | 2025-06-03 15:47:58 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:47:58.071269 | orchestrator | 2025-06-03 15:47:58 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:47:58.071316 | orchestrator | 2025-06-03 15:47:58 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:01.110153 | orchestrator | 2025-06-03 15:48:01 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:01.111918 | orchestrator | 2025-06-03 15:48:01 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:01.113155 | orchestrator | 2025-06-03 15:48:01 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:01.114260 | orchestrator | 2025-06-03 15:48:01 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:01.114285 | orchestrator | 2025-06-03 15:48:01 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:04.151840 | orchestrator | 2025-06-03 15:48:04 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:04.151899 | orchestrator | 2025-06-03 15:48:04 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:04.151909 | orchestrator | 2025-06-03 15:48:04 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:04.152172 | orchestrator | 2025-06-03 15:48:04 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:04.152188 | orchestrator | 2025-06-03 15:48:04 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:07.199852 | orchestrator | 2025-06-03 15:48:07 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:07.201761 | orchestrator | 2025-06-03 15:48:07 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:07.203685 | orchestrator | 2025-06-03 15:48:07 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:07.205301 | orchestrator | 2025-06-03 15:48:07 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:07.205345 | orchestrator | 2025-06-03 15:48:07 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:10.244051 | orchestrator | 2025-06-03 15:48:10 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:10.244481 | orchestrator | 2025-06-03 15:48:10 | INFO  | Task 
cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:10.245414 | orchestrator | 2025-06-03 15:48:10 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:10.246410 | orchestrator | 2025-06-03 15:48:10 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:10.246476 | orchestrator | 2025-06-03 15:48:10 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:13.282259 | orchestrator | 2025-06-03 15:48:13 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:13.285197 | orchestrator | 2025-06-03 15:48:13 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:13.288217 | orchestrator | 2025-06-03 15:48:13 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:13.290182 | orchestrator | 2025-06-03 15:48:13 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:13.290222 | orchestrator | 2025-06-03 15:48:13 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:16.333463 | orchestrator | 2025-06-03 15:48:16 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:16.333561 | orchestrator | 2025-06-03 15:48:16 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:16.334569 | orchestrator | 2025-06-03 15:48:16 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:16.336342 | orchestrator | 2025-06-03 15:48:16 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:16.336396 | orchestrator | 2025-06-03 15:48:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:19.371601 | orchestrator | 2025-06-03 15:48:19 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:19.372269 | orchestrator | 2025-06-03 15:48:19 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:19.373875 | orchestrator | 2025-06-03 15:48:19 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:19.374885 | orchestrator | 2025-06-03 15:48:19 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:19.376425 | orchestrator | 2025-06-03 15:48:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:22.416879 | orchestrator | 2025-06-03 15:48:22 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:22.418603 | orchestrator | 2025-06-03 15:48:22 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:22.419665 | orchestrator | 2025-06-03 15:48:22 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:22.420407 | orchestrator | 2025-06-03 15:48:22 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:22.420501 | orchestrator | 2025-06-03 15:48:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:25.479836 | orchestrator | 2025-06-03 15:48:25 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:25.487484 | orchestrator | 2025-06-03 15:48:25 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:25.489530 | orchestrator | 2025-06-03 15:48:25 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:25.491723 | orchestrator | 2025-06-03 15:48:25 | INFO  | Task 
27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:25.491884 | orchestrator | 2025-06-03 15:48:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:28.520011 | orchestrator | 2025-06-03 15:48:28 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:28.520198 | orchestrator | 2025-06-03 15:48:28 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:28.520859 | orchestrator | 2025-06-03 15:48:28 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:28.521422 | orchestrator | 2025-06-03 15:48:28 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:28.521575 | orchestrator | 2025-06-03 15:48:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:31.556693 | orchestrator | 2025-06-03 15:48:31 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:31.556748 | orchestrator | 2025-06-03 15:48:31 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:31.557835 | orchestrator | 2025-06-03 15:48:31 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:31.558531 | orchestrator | 2025-06-03 15:48:31 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:31.558574 | orchestrator | 2025-06-03 15:48:31 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:34.586359 | orchestrator | 2025-06-03 15:48:34 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:34.586495 | orchestrator | 2025-06-03 15:48:34 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:34.586990 | orchestrator | 2025-06-03 15:48:34 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:34.587612 | orchestrator | 2025-06-03 15:48:34 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:34.587653 | orchestrator | 2025-06-03 15:48:34 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:37.614730 | orchestrator | 2025-06-03 15:48:37 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:37.615812 | orchestrator | 2025-06-03 15:48:37 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:37.619231 | orchestrator | 2025-06-03 15:48:37 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:37.619293 | orchestrator | 2025-06-03 15:48:37 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:37.619304 | orchestrator | 2025-06-03 15:48:37 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:40.646677 | orchestrator | 2025-06-03 15:48:40 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:40.646858 | orchestrator | 2025-06-03 15:48:40 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:40.647427 | orchestrator | 2025-06-03 15:48:40 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:40.648256 | orchestrator | 2025-06-03 15:48:40 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:40.648295 | orchestrator | 2025-06-03 15:48:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:43.675979 | orchestrator | 2025-06-03 15:48:43 | INFO  | Task 
d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:43.676157 | orchestrator | 2025-06-03 15:48:43 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:43.676703 | orchestrator | 2025-06-03 15:48:43 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:43.677233 | orchestrator | 2025-06-03 15:48:43 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:43.677307 | orchestrator | 2025-06-03 15:48:43 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:46.705603 | orchestrator | 2025-06-03 15:48:46 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:46.705916 | orchestrator | 2025-06-03 15:48:46 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:46.706374 | orchestrator | 2025-06-03 15:48:46 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state STARTED 2025-06-03 15:48:46.707007 | orchestrator | 2025-06-03 15:48:46 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:46.707040 | orchestrator | 2025-06-03 15:48:46 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:49.748199 | orchestrator | 2025-06-03 15:48:49 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:49.748507 | orchestrator | 2025-06-03 15:48:49 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:49.750000 | orchestrator | 2025-06-03 15:48:49 | INFO  | Task 6be4d437-c21e-4147-b09a-2bf2d7c5fad3 is in state SUCCESS 2025-06-03 15:48:49.751407 | orchestrator | 2025-06-03 15:48:49.751647 | orchestrator | 2025-06-03 15:48:49.751675 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:48:49.751730 | orchestrator | 2025-06-03 15:48:49.751755 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:48:49.751775 | orchestrator | Tuesday 03 June 2025 15:44:42 +0000 (0:00:00.706) 0:00:00.706 ********** 2025-06-03 15:48:49.751797 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:48:49.751817 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:48:49.751837 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:48:49.751856 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:48:49.751876 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:48:49.751894 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:48:49.751913 | orchestrator | 2025-06-03 15:48:49.751933 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:48:49.751953 | orchestrator | Tuesday 03 June 2025 15:44:43 +0000 (0:00:01.433) 0:00:02.139 ********** 2025-06-03 15:48:49.752009 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-03 15:48:49.752045 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-03 15:48:49.752064 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-03 15:48:49.752144 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-03 15:48:49.752165 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-03 15:48:49.752185 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-03 15:48:49.752198 | orchestrator | 2025-06-03 15:48:49.752211 | orchestrator | PLAY [Apply role cinder] ******************************************************* 
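Editor's note: the repeated "is in state STARTED" / "Wait 1 second(s) until the next check" messages above are produced by the deploy wrapper polling the OSISM task queue until each Kolla play finishes (task 6be4d437 flips to SUCCESS before the cinder play output starts). The following is only a minimal sketch of such a poll-until-done loop; the client object and its get_task_state() helper are hypothetical illustrations, not the actual OSISM API.

import time

def wait_for_tasks(client, task_ids, interval=1.0):
    """Poll task states until every task has left the STARTED/PENDING states."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = client.get_task_state(task_id)  # hypothetical helper, for illustration only
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

With a one-second interval this reproduces the cadence of the log lines above: each registered task is reported once per cycle until it reaches a terminal state.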
2025-06-03 15:48:49.752224 | orchestrator | 2025-06-03 15:48:49.752237 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-03 15:48:49.752249 | orchestrator | Tuesday 03 June 2025 15:44:45 +0000 (0:00:01.170) 0:00:03.310 ********** 2025-06-03 15:48:49.752262 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:48:49.752276 | orchestrator | 2025-06-03 15:48:49.752287 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-03 15:48:49.752299 | orchestrator | Tuesday 03 June 2025 15:44:46 +0000 (0:00:01.801) 0:00:05.111 ********** 2025-06-03 15:48:49.752310 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-03 15:48:49.752321 | orchestrator | 2025-06-03 15:48:49.752332 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-03 15:48:49.752387 | orchestrator | Tuesday 03 June 2025 15:44:50 +0000 (0:00:03.915) 0:00:09.026 ********** 2025-06-03 15:48:49.752412 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-03 15:48:49.752424 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-03 15:48:49.752435 | orchestrator | 2025-06-03 15:48:49.752446 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-03 15:48:49.752527 | orchestrator | Tuesday 03 June 2025 15:44:57 +0000 (0:00:06.910) 0:00:15.937 ********** 2025-06-03 15:48:49.752542 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:48:49.752553 | orchestrator | 2025-06-03 15:48:49.752563 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-03 15:48:49.752574 | orchestrator | Tuesday 03 June 2025 15:45:01 +0000 (0:00:03.671) 0:00:19.609 ********** 2025-06-03 15:48:49.752585 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:48:49.752596 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-03 15:48:49.752606 | orchestrator | 2025-06-03 15:48:49.752617 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-03 15:48:49.752627 | orchestrator | Tuesday 03 June 2025 15:45:05 +0000 (0:00:04.406) 0:00:24.015 ********** 2025-06-03 15:48:49.752638 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:48:49.752649 | orchestrator | 2025-06-03 15:48:49.752660 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-03 15:48:49.752670 | orchestrator | Tuesday 03 June 2025 15:45:09 +0000 (0:00:03.967) 0:00:27.983 ********** 2025-06-03 15:48:49.752682 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-03 15:48:49.752703 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-03 15:48:49.752722 | orchestrator | 2025-06-03 15:48:49.752741 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-03 15:48:49.752760 | orchestrator | Tuesday 03 June 2025 15:45:17 +0000 (0:00:07.946) 0:00:35.929 ********** 2025-06-03 15:48:49.752808 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.752836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.752881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.752901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.752914 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.752935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.752947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.752959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.752981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.752994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.753006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.753024 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.753036 | orchestrator | 2025-06-03 15:48:49.753047 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-03 15:48:49.753058 | orchestrator | Tuesday 03 June 2025 15:45:20 +0000 (0:00:02.342) 0:00:38.272 ********** 2025-06-03 15:48:49.753093 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:48:49.753110 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:48:49.753121 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:48:49.753139 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:48:49.753150 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:48:49.753161 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:48:49.753171 | orchestrator | 2025-06-03 15:48:49.753182 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-03 15:48:49.753193 | orchestrator | Tuesday 03 June 2025 15:45:20 +0000 (0:00:00.625) 0:00:38.897 ********** 2025-06-03 15:48:49.753204 | orchestrator | 
skipping: [testbed-node-0] 2025-06-03 15:48:49.753214 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:48:49.753225 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:48:49.753236 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:48:49.753246 | orchestrator | 2025-06-03 15:48:49.753257 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-03 15:48:49.753268 | orchestrator | Tuesday 03 June 2025 15:45:21 +0000 (0:00:01.056) 0:00:39.953 ********** 2025-06-03 15:48:49.753279 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-03 15:48:49.753289 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-03 15:48:49.753300 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-03 15:48:49.753311 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-03 15:48:49.753323 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-03 15:48:49.753341 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-06-03 15:48:49.753360 | orchestrator | 2025-06-03 15:48:49.753378 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-06-03 15:48:49.753393 | orchestrator | Tuesday 03 June 2025 15:45:23 +0000 (0:00:02.179) 0:00:42.133 ********** 2025-06-03 15:48:49.753411 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:48:49.753424 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:48:49.753450 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:48:49.753474 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:48:49.753499 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:48:49.753518 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:48:49.753539 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:48:49.753569 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:48:49.753658 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:48:49.753693 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:48:49.753715 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:48:49.753729 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:48:49.753740 | orchestrator | 2025-06-03 15:48:49.753751 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-03 15:48:49.753770 | orchestrator | Tuesday 03 June 2025 15:45:27 +0000 (0:00:03.980) 0:00:46.114 ********** 2025-06-03 15:48:49.753781 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:48:49.753793 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:48:49.753804 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:48:49.753815 | orchestrator | 2025-06-03 15:48:49.753826 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-06-03 15:48:49.753837 | orchestrator | Tuesday 03 June 2025 15:45:30 +0000 (0:00:02.417) 0:00:48.531 ********** 2025-06-03 15:48:49.753858 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-06-03 15:48:49.753877 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-06-03 15:48:49.753894 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-06-03 15:48:49.753913 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-06-03 15:48:49.753932 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-06-03 15:48:49.753947 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-06-03 15:48:49.753958 | orchestrator | 2025-06-03 15:48:49.753969 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-06-03 15:48:49.753979 | orchestrator | Tuesday 03 June 2025 15:45:33 +0000 (0:00:03.400) 0:00:51.931 ********** 2025-06-03 15:48:49.753990 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-06-03 15:48:49.754001 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-06-03 15:48:49.754011 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-06-03 15:48:49.754112 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-06-03 15:48:49.754125 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 
2025-06-03 15:48:49.754136 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-06-03 15:48:49.754146 | orchestrator | 2025-06-03 15:48:49.754157 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-06-03 15:48:49.754168 | orchestrator | Tuesday 03 June 2025 15:45:34 +0000 (0:00:01.247) 0:00:53.179 ********** 2025-06-03 15:48:49.754179 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:48:49.754190 | orchestrator | 2025-06-03 15:48:49.754200 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-06-03 15:48:49.754211 | orchestrator | Tuesday 03 June 2025 15:45:35 +0000 (0:00:00.170) 0:00:53.349 ********** 2025-06-03 15:48:49.754222 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:48:49.754232 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:48:49.754243 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:48:49.754254 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:48:49.754264 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:48:49.754275 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:48:49.754285 | orchestrator | 2025-06-03 15:48:49.754296 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-03 15:48:49.754307 | orchestrator | Tuesday 03 June 2025 15:45:35 +0000 (0:00:00.720) 0:00:54.070 ********** 2025-06-03 15:48:49.754330 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:48:49.754342 | orchestrator | 2025-06-03 15:48:49.754353 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-06-03 15:48:49.754364 | orchestrator | Tuesday 03 June 2025 15:45:37 +0000 (0:00:01.391) 0:00:55.462 ********** 2025-06-03 15:48:49.754376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.754397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.754417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.754429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.754445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.754463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.754475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.754493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.754505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.754517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.754533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.754550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.754562 | orchestrator | 2025-06-03 15:48:49.754573 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-03 15:48:49.754584 | orchestrator | Tuesday 03 June 2025 15:45:40 +0000 (0:00:03.629) 0:00:59.091 ********** 2025-06-03 15:48:49.754600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:48:49.754612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.754624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:48:49.754643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.754675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:48:49.754696 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:48:49.754718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.754740 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:48:49.754760 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:48:49.754787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 
15:48:49.754799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.754811 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:48:49.754912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.754962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.754975 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:48:49.754986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.755007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.755019 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:48:49.755030 | orchestrator | 2025-06-03 15:48:49.755041 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-03 15:48:49.755052 | orchestrator | Tuesday 03 June 2025 15:45:42 +0000 (0:00:01.680) 0:01:00.771 ********** 2025-06-03 15:48:49.755064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:48:49.755117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.755130 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:48:49.755141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}})  2025-06-03 15:48:49.755153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.755172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:48:49.755184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.755202 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:48:49.755213 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:48:49.755229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.755241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.755252 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:48:49.755264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.755282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.755293 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:48:49.755305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.755330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.755342 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:48:49.755353 | orchestrator | 2025-06-03 15:48:49.755364 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-03 15:48:49.755375 | orchestrator | Tuesday 03 June 2025 15:45:44 +0000 (0:00:01.892) 0:01:02.664 ********** 2025-06-03 15:48:49.755386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.755397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.755415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.755433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.755449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.755461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.755472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.755489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.755507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.755523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.755535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.755546 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.755557 | orchestrator | 2025-06-03 15:48:49.755568 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 
2025-06-03 15:48:49.755579 | orchestrator | Tuesday 03 June 2025 15:45:48 +0000 (0:00:03.621) 0:01:06.285 ********** 2025-06-03 15:48:49.755590 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-03 15:48:49.755601 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:48:49.755612 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-03 15:48:49.755623 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:48:49.755634 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-03 15:48:49.755645 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-03 15:48:49.755656 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:48:49.755666 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-03 15:48:49.756113 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-03 15:48:49.756166 | orchestrator | 2025-06-03 15:48:49.756186 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-03 15:48:49.756206 | orchestrator | Tuesday 03 June 2025 15:45:50 +0000 (0:00:02.879) 0:01:09.165 ********** 2025-06-03 15:48:49.756226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.756252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.756265 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.756277 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.756298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.756317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.756343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.756355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.756367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.756378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.756402 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.756414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.756426 | orchestrator | 2025-06-03 15:48:49.756437 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-03 15:48:49.756448 | orchestrator | Tuesday 03 June 2025 15:46:06 +0000 (0:00:15.649) 0:01:24.815 ********** 2025-06-03 15:48:49.756459 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:48:49.756529 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:48:49.756543 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:48:49.756554 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:48:49.756565 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:48:49.756576 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:48:49.756587 | orchestrator | 2025-06-03 15:48:49.756602 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-03 15:48:49.756626 | orchestrator | Tuesday 03 June 2025 15:46:11 +0000 (0:00:04.958) 0:01:29.773 ********** 2025-06-03 15:48:49.756638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:48:49.756650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.756669 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:48:49.756690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:48:49.756704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.756718 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:48:49.756734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:48:49.756760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.756779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.756807 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:48:49.756829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.756851 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:48:49.756882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.756902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.756914 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:48:49.756931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.756943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:48:49.756966 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:48:49.756977 | orchestrator | 2025-06-03 15:48:49.756988 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-03 15:48:49.756999 | orchestrator | Tuesday 03 June 2025 15:46:13 +0000 (0:00:01.532) 0:01:31.306 ********** 2025-06-03 15:48:49.757010 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:48:49.757021 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:48:49.757032 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:48:49.757043 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:48:49.757053 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:48:49.757064 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:48:49.757230 | orchestrator | 2025-06-03 15:48:49.757259 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-03 15:48:49.757271 | orchestrator | Tuesday 03 June 2025 15:46:13 +0000 (0:00:00.735) 0:01:32.041 ********** 2025-06-03 15:48:49.757298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.757311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.757332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.757344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:48:49.757366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.757385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.757396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.757412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.757424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.757442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.757459 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.757476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:48:49.757488 | orchestrator | 2025-06-03 15:48:49.757499 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-03 15:48:49.757510 | orchestrator | Tuesday 03 June 2025 15:46:16 +0000 (0:00:02.806) 0:01:34.848 ********** 2025-06-03 15:48:49.757521 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:48:49.757532 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:48:49.757543 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:48:49.757554 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:48:49.757564 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:48:49.757575 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:48:49.757586 | orchestrator | 2025-06-03 15:48:49.757597 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-03 15:48:49.757608 | orchestrator | Tuesday 03 June 2025 15:46:17 +0000 (0:00:00.801) 0:01:35.649 ********** 2025-06-03 15:48:49.757619 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:48:49.757630 | orchestrator | 2025-06-03 15:48:49.757641 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-03 15:48:49.757652 | orchestrator | Tuesday 03 June 2025 15:46:19 +0000 (0:00:02.316) 0:01:37.965 ********** 2025-06-03 15:48:49.757662 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:48:49.757672 | orchestrator | 2025-06-03 15:48:49.757680 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-03 15:48:49.757692 | orchestrator | Tuesday 03 June 2025 15:46:22 +0000 (0:00:02.263) 0:01:40.229 ********** 2025-06-03 15:48:49.757700 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:48:49.757708 | orchestrator | 2025-06-03 15:48:49.757715 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 15:48:49.757727 | orchestrator | Tuesday 03 June 2025 15:46:43 +0000 (0:00:21.529) 0:02:01.758 ********** 2025-06-03 15:48:49.757735 | orchestrator | 2025-06-03 15:48:49.757743 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 15:48:49.757751 | orchestrator | Tuesday 03 June 2025 15:46:43 +0000 (0:00:00.164) 0:02:01.923 ********** 2025-06-03 15:48:49.757759 | orchestrator | 2025-06-03 15:48:49.757766 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 
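The "Creating Cinder database" and "Creating Cinder database user and setting permissions" tasks above are run once against the database cluster by kolla-ansible; the log does not show which module or SQL statements they use. Purely as an illustration of what those two steps amount to (hypothetical host and passwords, assuming a MariaDB endpoint reachable with PyMySQL; this is not the actual kolla-ansible implementation):

import pymysql

# Hypothetical values -- real credentials come from the deployment's secrets,
# and the database VIP from its configuration; nothing here is taken from the log.
DB_HOST = "db.example.internal"
ROOT_PASSWORD = "root-secret"
CINDER_PASSWORD = "cinder-secret"

conn = pymysql.connect(host=DB_HOST, user="root", password=ROOT_PASSWORD)
try:
    with conn.cursor() as cur:
        # "Creating Cinder database"
        cur.execute("CREATE DATABASE IF NOT EXISTS cinder")
        # "Creating Cinder database user and setting permissions"
        cur.execute(f"CREATE USER IF NOT EXISTS 'cinder'@'%' IDENTIFIED BY '{CINDER_PASSWORD}'")
        cur.execute("GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'")
    conn.commit()
finally:
    conn.close()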
15:48:49.757774 | orchestrator | Tuesday 03 June 2025 15:46:43 +0000 (0:00:00.119) 0:02:02.042 ********** 2025-06-03 15:48:49.757782 | orchestrator | 2025-06-03 15:48:49.757803 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 15:48:49.757812 | orchestrator | Tuesday 03 June 2025 15:46:43 +0000 (0:00:00.071) 0:02:02.114 ********** 2025-06-03 15:48:49.757819 | orchestrator | 2025-06-03 15:48:49.757827 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 15:48:49.757835 | orchestrator | Tuesday 03 June 2025 15:46:44 +0000 (0:00:00.087) 0:02:02.201 ********** 2025-06-03 15:48:49.757843 | orchestrator | 2025-06-03 15:48:49.757860 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 15:48:49.757876 | orchestrator | Tuesday 03 June 2025 15:46:44 +0000 (0:00:00.112) 0:02:02.313 ********** 2025-06-03 15:48:49.757884 | orchestrator | 2025-06-03 15:48:49.757892 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-03 15:48:49.757900 | orchestrator | Tuesday 03 June 2025 15:46:44 +0000 (0:00:00.064) 0:02:02.378 ********** 2025-06-03 15:48:49.757908 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:48:49.757916 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:48:49.757924 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:48:49.757932 | orchestrator | 2025-06-03 15:48:49.757940 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-03 15:48:49.757948 | orchestrator | Tuesday 03 June 2025 15:47:11 +0000 (0:00:27.666) 0:02:30.044 ********** 2025-06-03 15:48:49.757956 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:48:49.757964 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:48:49.757973 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:48:49.757981 | orchestrator | 2025-06-03 15:48:49.757989 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-03 15:48:49.757997 | orchestrator | Tuesday 03 June 2025 15:47:20 +0000 (0:00:09.129) 0:02:39.174 ********** 2025-06-03 15:48:49.758005 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:48:49.758013 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:48:49.758047 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:48:49.758055 | orchestrator | 2025-06-03 15:48:49.758064 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-03 15:48:49.758094 | orchestrator | Tuesday 03 June 2025 15:48:32 +0000 (0:01:11.563) 0:03:50.738 ********** 2025-06-03 15:48:49.758107 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:48:49.758115 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:48:49.758123 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:48:49.758131 | orchestrator | 2025-06-03 15:48:49.758139 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-03 15:48:49.758147 | orchestrator | Tuesday 03 June 2025 15:48:46 +0000 (0:00:13.581) 0:04:04.319 ********** 2025-06-03 15:48:49.758156 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:48:49.758164 | orchestrator | 2025-06-03 15:48:49.758172 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:48:49.758186 | orchestrator | testbed-node-0 : ok=21  changed=15 
 unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-03 15:48:49.758200 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-03 15:48:49.758208 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-03 15:48:49.758217 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-03 15:48:49.758224 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-03 15:48:49.758232 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-03 15:48:49.758240 | orchestrator |
2025-06-03 15:48:49.758248 | orchestrator |
2025-06-03 15:48:49.758256 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:48:49.758264 | orchestrator | Tuesday 03 June 2025 15:48:47 +0000 (0:00:01.218) 0:04:05.538 **********
2025-06-03 15:48:49.758272 | orchestrator | ===============================================================================
2025-06-03 15:48:49.758280 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 71.56s
2025-06-03 15:48:49.758288 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.67s
2025-06-03 15:48:49.758296 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.53s
2025-06-03 15:48:49.758303 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.65s
2025-06-03 15:48:49.758311 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 13.58s
2025-06-03 15:48:49.758319 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.13s
2025-06-03 15:48:49.758327 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.95s
2025-06-03 15:48:49.758338 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.91s
2025-06-03 15:48:49.758346 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 4.96s
2025-06-03 15:48:49.758354 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.41s
2025-06-03 15:48:49.758362 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.98s
2025-06-03 15:48:49.758370 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.97s
2025-06-03 15:48:49.758377 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.92s
2025-06-03 15:48:49.758385 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.67s
2025-06-03 15:48:49.758393 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.63s
2025-06-03 15:48:49.758401 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.62s
2025-06-03 15:48:49.758409 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.40s
2025-06-03 15:48:49.758417 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.88s
2025-06-03 15:48:49.758425 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.81s
2025-06-03
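The PLAY RECAP above is the quickest place to check a run like this: every host reports failed=0 and unreachable=0, so the cinder play completed cleanly. As a small triage aid (not part of the OSISM or Zuul tooling, just a generic helper for console output of this shape), a recap line can be checked programmatically like this:

import re

# Matches Ansible recap lines such as:
#   testbed-node-3 : ok=18 changed=12 unreachable=0 failed=0 skipped=8 ...
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s+:\s+ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def failed_hosts(console_text: str) -> list[str]:
    """Return hosts whose recap shows failed tasks or unreachable nodes."""
    bad = []
    for line in console_text.splitlines():
        # Strip the "timestamp | node |" prefix the job console adds, if present.
        line = line.split(" | ")[-1].strip()
        m = RECAP_RE.match(line)
        if m and (int(m["failed"]) or int(m["unreachable"])):
            bad.append(m["host"])
    return bad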
15:48:49.758433 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.42s 2025-06-03 15:48:49.758440 | orchestrator | 2025-06-03 15:48:49 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:48:49.758448 | orchestrator | 2025-06-03 15:48:49 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:49.758456 | orchestrator | 2025-06-03 15:48:49 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:52.781899 | orchestrator | 2025-06-03 15:48:52 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:52.782264 | orchestrator | 2025-06-03 15:48:52 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:52.782870 | orchestrator | 2025-06-03 15:48:52 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:48:52.783912 | orchestrator | 2025-06-03 15:48:52 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:52.783953 | orchestrator | 2025-06-03 15:48:52 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:55.813260 | orchestrator | 2025-06-03 15:48:55 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:55.816420 | orchestrator | 2025-06-03 15:48:55 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:55.816545 | orchestrator | 2025-06-03 15:48:55 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:48:55.816573 | orchestrator | 2025-06-03 15:48:55 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:55.816591 | orchestrator | 2025-06-03 15:48:55 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:48:58.844622 | orchestrator | 2025-06-03 15:48:58 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:48:58.845772 | orchestrator | 2025-06-03 15:48:58 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:48:58.846342 | orchestrator | 2025-06-03 15:48:58 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:48:58.847208 | orchestrator | 2025-06-03 15:48:58 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:48:58.847249 | orchestrator | 2025-06-03 15:48:58 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:01.883439 | orchestrator | 2025-06-03 15:49:01 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:01.883868 | orchestrator | 2025-06-03 15:49:01 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:01.885361 | orchestrator | 2025-06-03 15:49:01 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:01.885408 | orchestrator | 2025-06-03 15:49:01 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:01.885421 | orchestrator | 2025-06-03 15:49:01 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:04.912677 | orchestrator | 2025-06-03 15:49:04 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:04.914697 | orchestrator | 2025-06-03 15:49:04 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:04.915289 | orchestrator | 2025-06-03 15:49:04 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 
15:49:04.915925 | orchestrator | 2025-06-03 15:49:04 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:04.915956 | orchestrator | 2025-06-03 15:49:04 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:07.955957 | orchestrator | 2025-06-03 15:49:07 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:07.956252 | orchestrator | 2025-06-03 15:49:07 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:07.963963 | orchestrator | 2025-06-03 15:49:07 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:07.964357 | orchestrator | 2025-06-03 15:49:07 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:07.964419 | orchestrator | 2025-06-03 15:49:07 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:11.007212 | orchestrator | 2025-06-03 15:49:11 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:11.007632 | orchestrator | 2025-06-03 15:49:11 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:11.008592 | orchestrator | 2025-06-03 15:49:11 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:11.009403 | orchestrator | 2025-06-03 15:49:11 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:11.009438 | orchestrator | 2025-06-03 15:49:11 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:14.056028 | orchestrator | 2025-06-03 15:49:14 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:14.057242 | orchestrator | 2025-06-03 15:49:14 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:14.057855 | orchestrator | 2025-06-03 15:49:14 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:14.058513 | orchestrator | 2025-06-03 15:49:14 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:14.058623 | orchestrator | 2025-06-03 15:49:14 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:17.093725 | orchestrator | 2025-06-03 15:49:17 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:17.094120 | orchestrator | 2025-06-03 15:49:17 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:17.095280 | orchestrator | 2025-06-03 15:49:17 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:17.096354 | orchestrator | 2025-06-03 15:49:17 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:17.096396 | orchestrator | 2025-06-03 15:49:17 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:20.134303 | orchestrator | 2025-06-03 15:49:20 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:20.141387 | orchestrator | 2025-06-03 15:49:20 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:20.141575 | orchestrator | 2025-06-03 15:49:20 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:20.142449 | orchestrator | 2025-06-03 15:49:20 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:20.142481 | orchestrator | 2025-06-03 15:49:20 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:23.180603 | 
orchestrator | 2025-06-03 15:49:23 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:23.180748 | orchestrator | 2025-06-03 15:49:23 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:23.181465 | orchestrator | 2025-06-03 15:49:23 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:23.182392 | orchestrator | 2025-06-03 15:49:23 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:23.182441 | orchestrator | 2025-06-03 15:49:23 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:26.213232 | orchestrator | 2025-06-03 15:49:26 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:26.214215 | orchestrator | 2025-06-03 15:49:26 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:26.214717 | orchestrator | 2025-06-03 15:49:26 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:26.215526 | orchestrator | 2025-06-03 15:49:26 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:26.215589 | orchestrator | 2025-06-03 15:49:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:29.258854 | orchestrator | 2025-06-03 15:49:29 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:29.260926 | orchestrator | 2025-06-03 15:49:29 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:29.266937 | orchestrator | 2025-06-03 15:49:29 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:29.275360 | orchestrator | 2025-06-03 15:49:29 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:29.275839 | orchestrator | 2025-06-03 15:49:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:32.314721 | orchestrator | 2025-06-03 15:49:32 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:32.314977 | orchestrator | 2025-06-03 15:49:32 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:32.315786 | orchestrator | 2025-06-03 15:49:32 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:32.316421 | orchestrator | 2025-06-03 15:49:32 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:32.316500 | orchestrator | 2025-06-03 15:49:32 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:35.350402 | orchestrator | 2025-06-03 15:49:35 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:35.356042 | orchestrator | 2025-06-03 15:49:35 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:35.358852 | orchestrator | 2025-06-03 15:49:35 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:35.362245 | orchestrator | 2025-06-03 15:49:35 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:35.362291 | orchestrator | 2025-06-03 15:49:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:38.393982 | orchestrator | 2025-06-03 15:49:38 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:38.394554 | orchestrator | 2025-06-03 15:49:38 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:38.396190 | 
orchestrator | 2025-06-03 15:49:38 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:38.396238 | orchestrator | 2025-06-03 15:49:38 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:38.396251 | orchestrator | 2025-06-03 15:49:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:41.419741 | orchestrator | 2025-06-03 15:49:41 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:41.420301 | orchestrator | 2025-06-03 15:49:41 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:41.420830 | orchestrator | 2025-06-03 15:49:41 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:41.422947 | orchestrator | 2025-06-03 15:49:41 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:41.422981 | orchestrator | 2025-06-03 15:49:41 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:44.445960 | orchestrator | 2025-06-03 15:49:44 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:44.447780 | orchestrator | 2025-06-03 15:49:44 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:44.448290 | orchestrator | 2025-06-03 15:49:44 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:44.449101 | orchestrator | 2025-06-03 15:49:44 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:44.449158 | orchestrator | 2025-06-03 15:49:44 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:47.479890 | orchestrator | 2025-06-03 15:49:47 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state STARTED 2025-06-03 15:49:47.480339 | orchestrator | 2025-06-03 15:49:47 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:47.481476 | orchestrator | 2025-06-03 15:49:47 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:47.482299 | orchestrator | 2025-06-03 15:49:47 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:47.482337 | orchestrator | 2025-06-03 15:49:47 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:50.507951 | orchestrator | 2025-06-03 15:49:50.508125 | orchestrator | 2025-06-03 15:49:50 | INFO  | Task d95f5cb4-bb0a-4720-9103-9800b54043b0 is in state SUCCESS 2025-06-03 15:49:50.509951 | orchestrator | 2025-06-03 15:49:50.510172 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:49:50.510233 | orchestrator | 2025-06-03 15:49:50.510244 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:49:50.510253 | orchestrator | Tuesday 03 June 2025 15:47:34 +0000 (0:00:00.291) 0:00:00.291 ********** 2025-06-03 15:49:50.510263 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:49:50.510272 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:49:50.510280 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:49:50.510289 | orchestrator | 2025-06-03 15:49:50.510298 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:49:50.510306 | orchestrator | Tuesday 03 June 2025 15:47:34 +0000 (0:00:00.310) 0:00:00.602 ********** 2025-06-03 15:49:50.510314 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-03 
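The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines come from the deploy wrapper polling the OSISM task queue until each task reaches a terminal state (STARTED/SUCCESS look like Celery-style states; the actual client code is not shown in this log). A minimal, generic polling loop of the same shape, assuming a hypothetical get_state() callable and a fixed interval, would be:

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=7200):
    """Poll get_state(task_id) until every task reaches a terminal state."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):          # sorted() copies, so discard below is safe
            state = get_state(task_id)           # e.g. STARTED, SUCCESS, FAILURE
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)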
15:49:50.510324 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-03 15:49:50.510332 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-03 15:49:50.510341 | orchestrator | 2025-06-03 15:49:50.510349 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-03 15:49:50.510357 | orchestrator | 2025-06-03 15:49:50.510365 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-03 15:49:50.510374 | orchestrator | Tuesday 03 June 2025 15:47:35 +0000 (0:00:00.448) 0:00:01.051 ********** 2025-06-03 15:49:50.510383 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:49:50.510392 | orchestrator | 2025-06-03 15:49:50.510411 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-03 15:49:50.510429 | orchestrator | Tuesday 03 June 2025 15:47:35 +0000 (0:00:00.548) 0:00:01.600 ********** 2025-06-03 15:49:50.510438 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-03 15:49:50.510446 | orchestrator | 2025-06-03 15:49:50.510455 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-03 15:49:50.510497 | orchestrator | Tuesday 03 June 2025 15:47:39 +0000 (0:00:03.562) 0:00:05.163 ********** 2025-06-03 15:49:50.510507 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-03 15:49:50.510516 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-06-03 15:49:50.510524 | orchestrator | 2025-06-03 15:49:50.510534 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-03 15:49:50.510571 | orchestrator | Tuesday 03 June 2025 15:47:46 +0000 (0:00:06.908) 0:00:12.072 ********** 2025-06-03 15:49:50.510577 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:49:50.510583 | orchestrator | 2025-06-03 15:49:50.510589 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-06-03 15:49:50.510595 | orchestrator | Tuesday 03 June 2025 15:47:49 +0000 (0:00:03.408) 0:00:15.481 ********** 2025-06-03 15:49:50.510601 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:49:50.510607 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-06-03 15:49:50.510613 | orchestrator | 2025-06-03 15:49:50.510618 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-06-03 15:49:50.510624 | orchestrator | Tuesday 03 June 2025 15:47:53 +0000 (0:00:04.024) 0:00:19.505 ********** 2025-06-03 15:49:50.510630 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:49:50.510635 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-06-03 15:49:50.510642 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-06-03 15:49:50.510647 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-06-03 15:49:50.510653 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-06-03 15:49:50.510659 | orchestrator | 2025-06-03 15:49:50.510665 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-06-03 15:49:50.510671 | orchestrator | Tuesday 03 
June 2025 15:48:10 +0000 (0:00:16.549) 0:00:36.055 ********** 2025-06-03 15:49:50.510677 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-06-03 15:49:50.510682 | orchestrator | 2025-06-03 15:49:50.510688 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-06-03 15:49:50.510694 | orchestrator | Tuesday 03 June 2025 15:48:15 +0000 (0:00:05.397) 0:00:41.452 ********** 2025-06-03 15:49:50.510703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.510755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.510762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.510774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.510782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.510787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.510803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.510811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.510822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.510828 | orchestrator | 2025-06-03 15:49:50.510834 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-03 15:49:50.510840 | orchestrator | Tuesday 03 June 2025 15:48:18 +0000 (0:00:02.392) 0:00:43.844 ********** 2025-06-03 15:49:50.510845 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-03 15:49:50.510851 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-03 15:49:50.510857 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-06-03 15:49:50.510863 | orchestrator | 2025-06-03 15:49:50.510869 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-06-03 15:49:50.510875 | orchestrator | Tuesday 03 June 2025 15:48:19 +0000 (0:00:01.414) 0:00:45.259 ********** 2025-06-03 15:49:50.510881 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:50.510887 | orchestrator | 2025-06-03 15:49:50.510892 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-03 15:49:50.510897 | orchestrator | Tuesday 03 June 2025 15:48:19 +0000 (0:00:00.237) 0:00:45.496 ********** 2025-06-03 15:49:50.510902 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:50.510907 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:50.510912 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:50.510918 | orchestrator | 2025-06-03 15:49:50.510923 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-03 15:49:50.510928 | orchestrator | Tuesday 03 June 2025 15:48:20 +0000 (0:00:00.968) 0:00:46.465 ********** 2025-06-03 15:49:50.510933 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:49:50.510938 | orchestrator | 2025-06-03 15:49:50.510944 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-03 15:49:50.510949 | orchestrator | Tuesday 03 June 2025 15:48:21 +0000 (0:00:00.576) 0:00:47.042 ********** 2025-06-03 15:49:50.510954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.510965 | orchestrator | changed: 
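Each service entry in these dumps carries a healthcheck dict ('interval', 'retries', 'start_period', 'test', 'timeout') that kolla-ansible presumably wires into the container engine's native healthcheck. Purely as an illustration of what those fields mean (this is not how kolla-ansible itself applies them), the barbican-api entry above maps onto docker run flags roughly like this:

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Map a kolla-style healthcheck dict onto equivalent `docker run` flags."""
    test = hc["test"]
    # All entries in this log use the ['CMD-SHELL', '<command>'] form.
    cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test)
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Example taken from the barbican-api definition shown above:
print(healthcheck_to_docker_args({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
    "timeout": "30",
}))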
[testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.511027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.511039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511087 | orchestrator | 2025-06-03 15:49:50.511092 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-03 15:49:50.511097 | orchestrator | Tuesday 03 June 2025 15:48:25 +0000 (0:00:04.234) 0:00:51.276 ********** 2025-06-03 15:49:50.511102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:49:50.511107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511119 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:50.511132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:49:50.511150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511160 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:50.511165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:49:50.511170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511187 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:50.511192 | orchestrator | 2025-06-03 15:49:50.511200 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-03 15:49:50.511206 | orchestrator | Tuesday 03 June 2025 15:48:27 +0000 (0:00:01.583) 0:00:52.859 ********** 2025-06-03 15:49:50.511211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:49:50.511216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511226 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:50.511231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:49:50.511240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
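
Every item echoed by the service-cert-copy and barbican configuration tasks here is an entry from the same per-service map covering barbican-api, barbican-keystone-listener and barbican-worker. A condensed Python sketch of that structure, with values taken from the log; the variable name barbican_services and the filter below are illustrative assumptions, not the role's actual code:

import json

# Condensed reconstruction of the per-service map visible in the log items.
barbican_services = {
    "barbican-api": {
        "container_name": "barbican_api",
        "image": "registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530",
        "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
                        "timeout": "30"},
        "haproxy": {
            "barbican_api": {"enabled": "yes", "mode": "http", "external": False,
                             "port": "9311", "listen_port": "9311", "tls_backend": "no"},
            "barbican_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                      "external_fqdn": "api.testbed.osism.xyz",
                                      "port": "9311", "listen_port": "9311",
                                      "tls_backend": "no"},
        },
    },
    "barbican-keystone-listener": {
        "container_name": "barbican_keystone_listener",
        "image": "registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530",
        "healthcheck": {"test": ["CMD-SHELL",
                                 "healthcheck_port barbican-keystone-listener 5672"]},
    },
    "barbican-worker": {
        "container_name": "barbican_worker",
        "image": "registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"]},
    },
}

# The backend internal TLS copy tasks skip every item because no listener in
# the map requests a TLS backend (all haproxy entries carry tls_backend: 'no').
needs_backend_tls = [name for name, svc in barbican_services.items()
                     if any(lb.get("tls_backend") == "yes"
                            for lb in svc.get("haproxy", {}).values())]
print(json.dumps(needs_backend_tls))  # [] -- consistent with the 'skipping' results
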
2025-06-03 15:49:50.511251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511256 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:50.511262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:49:50.511267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511277 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:50.511282 | orchestrator | 2025-06-03 15:49:50.511287 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-03 15:49:50.511291 | orchestrator | Tuesday 03 June 2025 15:48:28 +0000 (0:00:01.031) 0:00:53.891 ********** 2025-06-03 15:49:50.511297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.511314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.511319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.511324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511329 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511366 | orchestrator | 2025-06-03 15:49:50.511371 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-03 15:49:50.511376 | orchestrator | Tuesday 03 June 2025 15:48:31 +0000 (0:00:03.568) 0:00:57.460 ********** 2025-06-03 15:49:50.511381 | orchestrator | changed: 
[testbed-node-0] 2025-06-03 15:49:50.511386 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:49:50.511391 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:49:50.511396 | orchestrator | 2025-06-03 15:49:50.511400 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-03 15:49:50.511405 | orchestrator | Tuesday 03 June 2025 15:48:34 +0000 (0:00:03.199) 0:01:00.659 ********** 2025-06-03 15:49:50.511410 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:49:50.511415 | orchestrator | 2025-06-03 15:49:50.511420 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-03 15:49:50.511424 | orchestrator | Tuesday 03 June 2025 15:48:36 +0000 (0:00:01.361) 0:01:02.021 ********** 2025-06-03 15:49:50.511429 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:50.511434 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:50.511439 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:50.511444 | orchestrator | 2025-06-03 15:49:50.511449 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-03 15:49:50.511457 | orchestrator | Tuesday 03 June 2025 15:48:37 +0000 (0:00:01.088) 0:01:03.110 ********** 2025-06-03 15:49:50.511463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.511474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.511480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.511485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511526 | orchestrator | 2025-06-03 15:49:50.511531 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-03 15:49:50.511536 | orchestrator | Tuesday 03 June 2025 15:48:48 +0000 (0:00:10.829) 0:01:13.939 ********** 2025-06-03 15:49:50.511541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:49:50.511551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511561 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:50.511572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:49:50.511577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511587 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:50.511593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:49:50.511604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.511614 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:50.511619 | orchestrator | 2025-06-03 15:49:50.511624 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-03 15:49:50.511629 | orchestrator | Tuesday 03 June 2025 15:48:49 +0000 (0:00:01.733) 0:01:15.673 ********** 2025-06-03 15:49:50.511641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.511646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.511656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:49:50.511661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.511704 | orchestrator | 2025-06-03 15:49:50.511709 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-03 15:49:50.511714 | orchestrator | Tuesday 03 June 2025 15:48:53 +0000 (0:00:03.353) 0:01:19.027 ********** 2025-06-03 15:49:50.511719 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:50.511723 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:50.511729 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:50.511734 | orchestrator | 2025-06-03 15:49:50.511739 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-03 15:49:50.511744 | orchestrator | Tuesday 03 June 2025 15:48:53 +0000 (0:00:00.540) 0:01:19.567 ********** 2025-06-03 15:49:50.511748 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:49:50.511753 | orchestrator | 2025-06-03 15:49:50.511758 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-03 15:49:50.511763 | orchestrator | Tuesday 03 June 2025 15:48:56 +0000 (0:00:02.459) 0:01:22.027 ********** 2025-06-03 15:49:50.511768 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:49:50.511773 | orchestrator | 2025-06-03 15:49:50.511777 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-03 15:49:50.511795 | orchestrator | Tuesday 03 June 2025 15:48:58 +0000 (0:00:02.437) 0:01:24.465 ********** 2025-06-03 15:49:50.511800 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:49:50.511812 | orchestrator | 2025-06-03 15:49:50.511817 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-03 15:49:50.511822 | orchestrator | Tuesday 03 June 2025 15:49:11 +0000 (0:00:12.543) 0:01:37.008 ********** 
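
After the bootstrap container finishes, the handlers below restart the barbican-api, barbican-keystone-listener and barbican-worker containers, and the healthcheck commands from the service definitions (healthcheck_curl against the API on 9311, healthcheck_port against 5672 for the listener and worker) decide when each container counts as healthy again. A rough Python sketch of such a probe loop, assuming the numeric healthcheck fields are seconds; this is an illustration of the parameters, not the kolla healthcheck implementation:

import subprocess
import time

def probe(test_cmd: str, retries: int = 3, interval: int = 30,
          start_period: int = 5, timeout: int = 30) -> bool:
    """Illustrative probe mirroring the healthcheck fields echoed in the log;
    treating the values as seconds is an assumption."""
    time.sleep(start_period)                  # grace period after a restart
    for _ in range(retries):
        try:
            if subprocess.call(test_cmd, shell=True, timeout=timeout) == 0:
                return True                   # container reports healthy
        except subprocess.TimeoutExpired:
            pass                              # a timed-out probe counts as a failure
        time.sleep(interval)                  # wait before the next attempt
    return False                              # would be flagged unhealthy

# Hypothetical usage: probe("healthcheck_curl http://192.168.16.10:9311")
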
2025-06-03 15:49:50.511827 | orchestrator | 2025-06-03 15:49:50.511832 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-03 15:49:50.511836 | orchestrator | Tuesday 03 June 2025 15:49:11 +0000 (0:00:00.184) 0:01:37.192 ********** 2025-06-03 15:49:50.511841 | orchestrator | 2025-06-03 15:49:50.511846 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-03 15:49:50.511851 | orchestrator | Tuesday 03 June 2025 15:49:11 +0000 (0:00:00.142) 0:01:37.335 ********** 2025-06-03 15:49:50.511856 | orchestrator | 2025-06-03 15:49:50.511860 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-03 15:49:50.511865 | orchestrator | Tuesday 03 June 2025 15:49:11 +0000 (0:00:00.174) 0:01:37.509 ********** 2025-06-03 15:49:50.511870 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:49:50.511875 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:49:50.511880 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:49:50.511884 | orchestrator | 2025-06-03 15:49:50.511889 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-03 15:49:50.511894 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:00:12.875) 0:01:50.384 ********** 2025-06-03 15:49:50.511902 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:49:50.511907 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:49:50.511916 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:49:50.511925 | orchestrator | 2025-06-03 15:49:50.511930 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-03 15:49:50.511935 | orchestrator | Tuesday 03 June 2025 15:49:35 +0000 (0:00:11.172) 0:02:01.557 ********** 2025-06-03 15:49:50.511940 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:49:50.511945 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:49:50.511949 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:49:50.511954 | orchestrator | 2025-06-03 15:49:50.511959 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:49:50.511965 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-03 15:49:50.511971 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:49:50.511976 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:49:50.511981 | orchestrator | 2025-06-03 15:49:50.511986 | orchestrator | 2025-06-03 15:49:50.511990 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:49:50.511995 | orchestrator | Tuesday 03 June 2025 15:49:47 +0000 (0:00:11.489) 0:02:13.047 ********** 2025-06-03 15:49:50.512000 | orchestrator | =============================================================================== 2025-06-03 15:49:50.512005 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.55s 2025-06-03 15:49:50.512067 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.88s 2025-06-03 15:49:50.512072 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.54s 2025-06-03 15:49:50.512077 | orchestrator | barbican : Restart barbican-worker container 
--------------------------- 11.49s 2025-06-03 15:49:50.512082 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.17s 2025-06-03 15:49:50.512087 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.83s 2025-06-03 15:49:50.512092 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.91s 2025-06-03 15:49:50.512096 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.40s 2025-06-03 15:49:50.512101 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.23s 2025-06-03 15:49:50.512106 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.02s 2025-06-03 15:49:50.512111 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.57s 2025-06-03 15:49:50.512116 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.56s 2025-06-03 15:49:50.512120 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.41s 2025-06-03 15:49:50.512125 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.35s 2025-06-03 15:49:50.512130 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.20s 2025-06-03 15:49:50.512135 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.46s 2025-06-03 15:49:50.512140 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.44s 2025-06-03 15:49:50.512144 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.39s 2025-06-03 15:49:50.512149 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.73s 2025-06-03 15:49:50.512154 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.58s 2025-06-03 15:49:50.512159 | orchestrator | 2025-06-03 15:49:50 | INFO  | Task d8f654b4-c81b-4825-94f7-7a4cb5ef8a60 is in state STARTED 2025-06-03 15:49:50.512273 | orchestrator | 2025-06-03 15:49:50 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:50.512282 | orchestrator | 2025-06-03 15:49:50 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:50.512293 | orchestrator | 2025-06-03 15:49:50 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:50.512298 | orchestrator | 2025-06-03 15:49:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:53.539519 | orchestrator | 2025-06-03 15:49:53 | INFO  | Task d8f654b4-c81b-4825-94f7-7a4cb5ef8a60 is in state STARTED 2025-06-03 15:49:53.539902 | orchestrator | 2025-06-03 15:49:53 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:49:53.540569 | orchestrator | 2025-06-03 15:49:53 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED 2025-06-03 15:49:53.541350 | orchestrator | 2025-06-03 15:49:53 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED 2025-06-03 15:49:53.541376 | orchestrator | 2025-06-03 15:49:53 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:56.576252 | orchestrator | 2025-06-03 15:49:56 | INFO  | Task d8f654b4-c81b-4825-94f7-7a4cb5ef8a60 is in state STARTED 2025-06-03 15:49:56.576647 | orchestrator | 2025-06-03 15:49:56 | INFO  | Task 
2025-06-03 15:50:48.379752 | orchestrator | 2025-06-03 15:50:48 | INFO  | Task d8f654b4-c81b-4825-94f7-7a4cb5ef8a60 is in state SUCCESS
2025-06-03 15:50:48.381903 | orchestrator | 2025-06-03 15:50:48 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED
2025-06-03 15:50:48.384159 | orchestrator | 2025-06-03 15:50:48 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED
2025-06-03 15:50:48.385566 | orchestrator | 2025-06-03 15:50:48 | INFO  | Task 40733156-9f63-49b6-9d20-02942f367f7a is in state STARTED
2025-06-03 15:50:48.387226 | orchestrator | 2025-06-03 15:50:48 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state STARTED
2025-06-03 15:50:48.387488 | orchestrator | 2025-06-03 15:50:48 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:51:55.369106 | orchestrator | 2025-06-03 15:51:55 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED
2025-06-03 15:51:55.369144 | orchestrator | 2025-06-03 15:51:55 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED
2025-06-03 15:51:55.371562 | orchestrator | 2025-06-03 15:51:55 | INFO  | Task 40733156-9f63-49b6-9d20-02942f367f7a is in state STARTED
2025-06-03 15:51:55.379292 | orchestrator | 2025-06-03 15:51:55 | INFO  | Task 27847a0d-4c96-4fd7-a584-906f65999339 is in state SUCCESS
2025-06-03 15:51:55.379396 | orchestrator |
2025-06-03 15:51:55.379407 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-06-03 15:51:55.379411 | orchestrator |
2025-06-03 15:51:55.379415 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-06-03 15:51:55.379420 | orchestrator | Tuesday 03 June 2025 15:49:56 +0000 (0:00:00.335) 0:00:00.335 **********
2025-06-03 15:51:55.379424 | orchestrator | changed: [localhost]
2025-06-03 15:51:55.379429 | orchestrator |
2025-06-03 15:51:55.379433 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-06-03 15:51:55.379437 | orchestrator | Tuesday 03 June 2025 15:49:57 +0000 (0:00:01.410) 0:00:01.746 **********
2025-06-03 15:51:55.379441 | orchestrator | changed: [localhost]
2025-06-03 15:51:55.379444 | orchestrator |
2025-06-03 15:51:55.379448 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-06-03 15:51:55.379452 | orchestrator | Tuesday 03 June 2025 15:50:34 +0000 (0:00:36.665) 0:00:38.412 **********
2025-06-03 15:51:55.379456 | orchestrator | changed: [localhost]
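The play above creates a destination directory and downloads the ironic-python-agent initramfs and kernel images used for bare-metal provisioning. As a rough illustration of what such a download step amounts to, assuming placeholder URLs and paths rather than the values actually used by the play:

    import urllib.request
    from pathlib import Path

    # Placeholder values; the real play defines its own source URL and destination directory.
    BASE_URL = "https://example.org/ironic-python-agent"
    DEST_DIR = Path("/opt/ipa-images")

    def fetch(artifact: str) -> Path:
        DEST_DIR.mkdir(parents=True, exist_ok=True)   # "Ensure the destination directory exists"
        target = DEST_DIR / artifact
        urllib.request.urlretrieve(f"{BASE_URL}/{artifact}", target)  # "Download ironic-agent ..."
        return target

    for artifact in ("ironic-agent.initramfs", "ironic-agent.kernel"):
        fetch(artifact)

The timings in the recap further down (36.67 s for the initramfs versus 10.80 s for the kernel) are consistent with the initramfs being by far the larger of the two artifacts.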
2025-06-03 15:51:55.379460 | orchestrator |
2025-06-03 15:51:55.379464 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-03 15:51:55.379467 | orchestrator |
2025-06-03 15:51:55.379471 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-03 15:51:55.379475 | orchestrator | Tuesday 03 June 2025 15:50:45 +0000 (0:00:10.795) 0:00:49.207 **********
2025-06-03 15:51:55.379479 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:51:55.379483 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:51:55.379486 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:51:55.379490 | orchestrator |
2025-06-03 15:51:55.379494 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-03 15:51:55.379498 | orchestrator | Tuesday 03 June 2025 15:50:45 +0000 (0:00:00.687) 0:00:49.895 **********
2025-06-03 15:51:55.379502 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-06-03 15:51:55.379506 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-06-03 15:51:55.379510 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-06-03 15:51:55.379514 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-06-03 15:51:55.379517 | orchestrator |
2025-06-03 15:51:55.379521 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-06-03 15:51:55.379525 | orchestrator | skipping: no hosts matched
2025-06-03 15:51:55.379529 | orchestrator |
2025-06-03 15:51:55.379533 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:51:55.379546 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:51:55.379562 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:51:55.379567 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:51:55.379571 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:51:55.379574 | orchestrator |
2025-06-03 15:51:55.379578 | orchestrator |
2025-06-03 15:51:55.379582 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:51:55.379586 | orchestrator | Tuesday 03 June 2025 15:50:46 +0000 (0:00:00.924) 0:00:50.820 **********
2025-06-03 15:51:55.379590 | orchestrator | ===============================================================================
2025-06-03 15:51:55.379594 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 36.67s
2025-06-03 15:51:55.379597 | orchestrator | Download ironic-agent kernel ------------------------------------------- 10.80s
2025-06-03 15:51:55.379601 | orchestrator | Ensure the destination directory exists --------------------------------- 1.41s
2025-06-03 15:51:55.379605 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s
2025-06-03 15:51:55.379609 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s
2025-06-03 15:51:55.379612 | orchestrator |
2025-06-03 15:51:55.379948 | orchestrator | 2025-06-03 15:51:55 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:51:55.381023 | orchestrator |
2025-06-03 15:51:55.381044 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-03 15:51:55.381063 | orchestrator |
2025-06-03 15:51:55.381069 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-03 15:51:55.381073 | orchestrator | Tuesday 03 June 2025 15:47:25 +0000 (0:00:00.320) 0:00:00.320 **********
2025-06-03 15:51:55.381078 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:51:55.381083 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:51:55.381087 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:51:55.381092 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:51:55.381096 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:51:55.381100 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:51:55.381105 | orchestrator |
2025-06-03 15:51:55.381110 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-03 15:51:55.381116 | orchestrator | Tuesday 03 June 2025 15:47:26 +0000
(0:00:00.997) 0:00:01.317 ********** 2025-06-03 15:51:55.381124 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-03 15:51:55.381144 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-03 15:51:55.381154 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-03 15:51:55.381160 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-03 15:51:55.381175 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-03 15:51:55.381182 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-03 15:51:55.381189 | orchestrator | 2025-06-03 15:51:55.381196 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-03 15:51:55.381203 | orchestrator | 2025-06-03 15:51:55.381210 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-03 15:51:55.381217 | orchestrator | Tuesday 03 June 2025 15:47:26 +0000 (0:00:00.767) 0:00:02.085 ********** 2025-06-03 15:51:55.381225 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:51:55.381232 | orchestrator | 2025-06-03 15:51:55.381239 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-03 15:51:55.381246 | orchestrator | Tuesday 03 June 2025 15:47:28 +0000 (0:00:01.299) 0:00:03.384 ********** 2025-06-03 15:51:55.381252 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:51:55.381259 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:51:55.381276 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:51:55.381283 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:51:55.381289 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:51:55.381296 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:51:55.381302 | orchestrator | 2025-06-03 15:51:55.381308 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-03 15:51:55.381314 | orchestrator | Tuesday 03 June 2025 15:47:29 +0000 (0:00:01.366) 0:00:04.751 ********** 2025-06-03 15:51:55.381320 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:51:55.381327 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:51:55.381334 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:51:55.381340 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:51:55.381346 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:51:55.381352 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:51:55.381358 | orchestrator | 2025-06-03 15:51:55.381364 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-03 15:51:55.381370 | orchestrator | Tuesday 03 June 2025 15:47:30 +0000 (0:00:01.098) 0:00:05.850 ********** 2025-06-03 15:51:55.381377 | orchestrator | ok: [testbed-node-0] => { 2025-06-03 15:51:55.381383 | orchestrator |  "changed": false, 2025-06-03 15:51:55.381390 | orchestrator |  "msg": "All assertions passed" 2025-06-03 15:51:55.381396 | orchestrator | } 2025-06-03 15:51:55.381402 | orchestrator | ok: [testbed-node-1] => { 2025-06-03 15:51:55.381428 | orchestrator |  "changed": false, 2025-06-03 15:51:55.381436 | orchestrator |  "msg": "All assertions passed" 2025-06-03 15:51:55.381443 | orchestrator | } 2025-06-03 15:51:55.381449 | orchestrator | ok: [testbed-node-2] => { 2025-06-03 15:51:55.381455 | orchestrator |  "changed": false, 2025-06-03 
15:51:55.381462 | orchestrator |  "msg": "All assertions passed" 2025-06-03 15:51:55.381469 | orchestrator | } 2025-06-03 15:51:55.381524 | orchestrator | ok: [testbed-node-3] => { 2025-06-03 15:51:55.381531 | orchestrator |  "changed": false, 2025-06-03 15:51:55.381538 | orchestrator |  "msg": "All assertions passed" 2025-06-03 15:51:55.381544 | orchestrator | } 2025-06-03 15:51:55.381558 | orchestrator | ok: [testbed-node-4] => { 2025-06-03 15:51:55.381564 | orchestrator |  "changed": false, 2025-06-03 15:51:55.381571 | orchestrator |  "msg": "All assertions passed" 2025-06-03 15:51:55.381578 | orchestrator | } 2025-06-03 15:51:55.381584 | orchestrator | ok: [testbed-node-5] => { 2025-06-03 15:51:55.381591 | orchestrator |  "changed": false, 2025-06-03 15:51:55.381598 | orchestrator |  "msg": "All assertions passed" 2025-06-03 15:51:55.381605 | orchestrator | } 2025-06-03 15:51:55.381611 | orchestrator | 2025-06-03 15:51:55.381618 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-03 15:51:55.381625 | orchestrator | Tuesday 03 June 2025 15:47:31 +0000 (0:00:00.828) 0:00:06.678 ********** 2025-06-03 15:51:55.381632 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.381638 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.381644 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.381651 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.381658 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.381665 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.381671 | orchestrator | 2025-06-03 15:51:55.381678 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-03 15:51:55.381684 | orchestrator | Tuesday 03 June 2025 15:47:32 +0000 (0:00:00.596) 0:00:07.275 ********** 2025-06-03 15:51:55.381691 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-03 15:51:55.381698 | orchestrator | 2025-06-03 15:51:55.381704 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-06-03 15:51:55.381711 | orchestrator | Tuesday 03 June 2025 15:47:35 +0000 (0:00:03.696) 0:00:10.971 ********** 2025-06-03 15:51:55.381718 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-03 15:51:55.381725 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-06-03 15:51:55.381738 | orchestrator | 2025-06-03 15:51:55.381753 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-06-03 15:51:55.381760 | orchestrator | Tuesday 03 June 2025 15:47:42 +0000 (0:00:07.093) 0:00:18.064 ********** 2025-06-03 15:51:55.381766 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:51:55.381773 | orchestrator | 2025-06-03 15:51:55.381779 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-06-03 15:51:55.381786 | orchestrator | Tuesday 03 June 2025 15:47:46 +0000 (0:00:03.563) 0:00:21.628 ********** 2025-06-03 15:51:55.381793 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:51:55.381799 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-03 15:51:55.381806 | orchestrator | 2025-06-03 15:51:55.381813 | orchestrator | TASK [service-ks-register : neutron | Creating roles] 
************************** 2025-06-03 15:51:55.381819 | orchestrator | Tuesday 03 June 2025 15:47:50 +0000 (0:00:04.438) 0:00:26.066 ********** 2025-06-03 15:51:55.381826 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:51:55.381832 | orchestrator | 2025-06-03 15:51:55.381838 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-06-03 15:51:55.381845 | orchestrator | Tuesday 03 June 2025 15:47:54 +0000 (0:00:03.628) 0:00:29.695 ********** 2025-06-03 15:51:55.381852 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-03 15:51:55.381858 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-06-03 15:51:55.381865 | orchestrator | 2025-06-03 15:51:55.381872 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-03 15:51:55.381890 | orchestrator | Tuesday 03 June 2025 15:48:02 +0000 (0:00:08.341) 0:00:38.037 ********** 2025-06-03 15:51:55.381897 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.381903 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.381910 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.381916 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.381922 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.381928 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.381934 | orchestrator | 2025-06-03 15:51:55.381941 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-06-03 15:51:55.381947 | orchestrator | Tuesday 03 June 2025 15:48:03 +0000 (0:00:00.839) 0:00:38.877 ********** 2025-06-03 15:51:55.381954 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.381960 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.381987 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.381993 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.382000 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.382006 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.382067 | orchestrator | 2025-06-03 15:51:55.382078 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-03 15:51:55.382084 | orchestrator | Tuesday 03 June 2025 15:48:05 +0000 (0:00:02.227) 0:00:41.104 ********** 2025-06-03 15:51:55.382091 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:51:55.382098 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:51:55.382105 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:51:55.382111 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:51:55.382118 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:51:55.382125 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:51:55.382131 | orchestrator | 2025-06-03 15:51:55.382138 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-03 15:51:55.382144 | orchestrator | Tuesday 03 June 2025 15:48:07 +0000 (0:00:01.101) 0:00:42.206 ********** 2025-06-03 15:51:55.382151 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.382158 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.382165 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.382186 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.382194 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.382206 | orchestrator | skipping: [testbed-node-3] 2025-06-03 
15:51:55.382213 | orchestrator | 2025-06-03 15:51:55.382220 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-03 15:51:55.382227 | orchestrator | Tuesday 03 June 2025 15:48:09 +0000 (0:00:02.114) 0:00:44.320 ********** 2025-06-03 15:51:55.382239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.382256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.382263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.382271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.382280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.382292 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.382299 | orchestrator | 2025-06-03 15:51:55.382306 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-03 15:51:55.382312 | orchestrator | Tuesday 03 June 2025 15:48:12 +0000 (0:00:02.995) 0:00:47.315 ********** 2025-06-03 15:51:55.382319 | orchestrator | [WARNING]: Skipped 2025-06-03 15:51:55.382326 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-03 15:51:55.382333 | orchestrator | due to this access issue: 2025-06-03 15:51:55.382340 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-03 15:51:55.382347 | orchestrator | a directory 2025-06-03 15:51:55.382353 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:51:55.382360 | orchestrator | 2025-06-03 15:51:55.382375 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-03 15:51:55.382382 | orchestrator | Tuesday 03 June 2025 15:48:13 +0000 (0:00:00.894) 0:00:48.210 ********** 2025-06-03 15:51:55.382389 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:51:55.382397 | orchestrator | 2025-06-03 15:51:55.382404 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-03 15:51:55.382410 | orchestrator | Tuesday 03 June 2025 15:48:14 +0000 (0:00:01.245) 0:00:49.456 ********** 2025-06-03 15:51:55.382417 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.382424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.382437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.382444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}}) 2025-06-03 15:51:55.382455 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.382463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.382469 | orchestrator | 2025-06-03 15:51:55.382477 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-03 15:51:55.382483 | orchestrator | Tuesday 03 June 2025 15:48:17 +0000 (0:00:03.728) 0:00:53.184 ********** 2025-06-03 15:51:55.382494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.382501 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.382511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.382518 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.382530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.382537 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.382544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.382551 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.382558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.382570 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.382577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.382584 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.382591 | orchestrator | 2025-06-03 15:51:55.382598 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-03 15:51:55.382610 | orchestrator | Tuesday 03 June 2025 15:48:20 +0000 (0:00:02.995) 0:00:56.180 ********** 2025-06-03 15:51:55.382617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.382623 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.382634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.382641 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.382648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.382658 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.382665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.382672 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.382681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.382688 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.382695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.382702 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.382709 | orchestrator | 2025-06-03 15:51:55.382715 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-06-03 15:51:55.382725 | orchestrator | Tuesday 03 June 2025 15:48:24 +0000 (0:00:03.992) 0:01:00.172 ********** 2025-06-03 15:51:55.382732 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.382739 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.382746 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.382752 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.382759 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.382766 | orchestrator | skipping: [testbed-node-5] 
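For context: the two service-cert-copy tasks and the "Creating TLS backend PEM File" task above are skipped on every host, presumably because backend TLS is not enabled in this testbed (in upstream kolla-ansible that would be the kolla_enable_tls_backend switch; naming that flag here is an assumption, not something the log states). The sketch below is not playbook code; it only models the per-service dicts printed in the loop output and the gating that would leave these tasks skipped.

```python
# Minimal model of the service dicts shown in the loop output and of the
# condition under which TLS material would be copied. Assumption: backend TLS
# is toggled by a flag equivalent to kolla-ansible's kolla_enable_tls_backend,
# which appears to be off in this run.

neutron_services = {
    "neutron-server": {
        "container_name": "neutron_server",
        "enabled": True,
        "group": "neutron-server",
        "host_in_groups": True,   # control plane: testbed-node-0..2
    },
    "neutron-ovn-metadata-agent": {
        "container_name": "neutron_ovn_metadata_agent",
        "enabled": True,
        "host_in_groups": True,   # compute: testbed-node-3..5
    },
}


def needs_backend_tls_copy(service: dict, tls_backend_enabled: bool) -> bool:
    """Copy cert/key only for enabled services on hosts in the service's group,
    and only when backend TLS is switched on at all."""
    return (
        tls_backend_enabled
        and service.get("enabled", False)
        and service.get("host_in_groups", False)
    )


for name, svc in neutron_services.items():
    # With backend TLS disabled this prints False for both services,
    # matching the "skipping" results above.
    print(name, needs_backend_tls_copy(svc, tls_backend_enabled=False))
```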
2025-06-03 15:51:55.382772 | orchestrator | 2025-06-03 15:51:55.382779 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-03 15:51:55.382790 | orchestrator | Tuesday 03 June 2025 15:48:28 +0000 (0:00:03.095) 0:01:03.268 ********** 2025-06-03 15:51:55.382797 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.382803 | orchestrator | 2025-06-03 15:51:55.382810 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-03 15:51:55.382816 | orchestrator | Tuesday 03 June 2025 15:48:28 +0000 (0:00:00.103) 0:01:03.371 ********** 2025-06-03 15:51:55.382823 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.382830 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.382836 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.382843 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.382850 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.382856 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.382863 | orchestrator | 2025-06-03 15:51:55.382870 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-03 15:51:55.382876 | orchestrator | Tuesday 03 June 2025 15:48:29 +0000 (0:00:00.818) 0:01:04.189 ********** 2025-06-03 15:51:55.382893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.382899 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.382906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.382912 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.382921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.382928 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.382938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.382949 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.382955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.382961 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.382968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.382974 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.382980 | orchestrator | 2025-06-03 15:51:55.382987 | orchestrator | TASK 
[neutron : Copying over config.json files for services] ******************* 2025-06-03 15:51:55.382993 | orchestrator | Tuesday 03 June 2025 15:48:31 +0000 (0:00:02.586) 0:01:06.776 ********** 2025-06-03 15:51:55.383003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.383038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.383048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.383055 | orchestrator | 2025-06-03 15:51:55.383061 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-03 15:51:55.383071 | orchestrator | Tuesday 03 June 2025 15:48:36 +0000 (0:00:04.822) 0:01:11.598 ********** 2025-06-03 15:51:55.383081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.383110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.383124 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.383131 | orchestrator | 2025-06-03 15:51:55.383137 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-03 15:51:55.383143 | orchestrator | Tuesday 03 June 2025 15:48:43 +0000 (0:00:07.551) 0:01:19.150 ********** 2025-06-03 15:51:55.383150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.383156 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.383162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.383172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383178 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.383188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': 
True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.383195 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.383205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383218 | orchestrator | 2025-06-03 15:51:55.383225 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-03 15:51:55.383231 | orchestrator | Tuesday 03 June 2025 15:48:47 +0000 (0:00:03.968) 0:01:23.119 ********** 2025-06-03 15:51:55.383238 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.383245 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:51:55.383251 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.383257 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:51:55.383263 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.383269 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:51:55.383275 | orchestrator | 2025-06-03 15:51:55.383281 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-03 15:51:55.383288 | orchestrator | Tuesday 03 June 2025 15:48:51 +0000 (0:00:03.582) 0:01:26.702 ********** 2025-06-03 15:51:55.383294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.383305 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.383316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.383323 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.383332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.383339 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.383346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383353 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.383373 | orchestrator | 2025-06-03 15:51:55.383380 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-03 15:51:55.383386 | orchestrator | Tuesday 03 June 2025 15:48:55 +0000 (0:00:04.262) 0:01:30.965 ********** 2025-06-03 15:51:55.383392 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.383399 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.383404 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.383411 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.383418 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.383424 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.383430 | orchestrator | 2025-06-03 15:51:55.383436 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-06-03 15:51:55.383443 | orchestrator | Tuesday 03 June 2025 15:48:58 +0000 (0:00:02.735) 0:01:33.701 ********** 2025-06-03 15:51:55.383449 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.383455 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.383461 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.383468 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.383475 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.383481 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.383487 | orchestrator | 2025-06-03 15:51:55.383494 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-03 15:51:55.383500 | orchestrator | Tuesday 03 June 2025 15:49:00 +0000 (0:00:02.224) 0:01:35.925 ********** 2025-06-03 15:51:55.383507 | orchestrator | 
skipping: [testbed-node-1] 2025-06-03 15:51:55.383513 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.383520 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.383529 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.383535 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.383542 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.383548 | orchestrator | 2025-06-03 15:51:55.383554 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-06-03 15:51:55.383560 | orchestrator | Tuesday 03 June 2025 15:49:02 +0000 (0:00:01.804) 0:01:37.729 ********** 2025-06-03 15:51:55.383567 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.383573 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.383579 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.383585 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.383592 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.383598 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.383604 | orchestrator | 2025-06-03 15:51:55.383611 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-03 15:51:55.383617 | orchestrator | Tuesday 03 June 2025 15:49:04 +0000 (0:00:01.947) 0:01:39.676 ********** 2025-06-03 15:51:55.383624 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.383630 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.383636 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.383642 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.383648 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.383654 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.383661 | orchestrator | 2025-06-03 15:51:55.383670 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-06-03 15:51:55.383676 | orchestrator | Tuesday 03 June 2025 15:49:06 +0000 (0:00:01.958) 0:01:41.635 ********** 2025-06-03 15:51:55.383683 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.383689 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.383695 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.383702 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.383708 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.383715 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.383721 | orchestrator | 2025-06-03 15:51:55.383728 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-03 15:51:55.383734 | orchestrator | Tuesday 03 June 2025 15:49:08 +0000 (0:00:01.946) 0:01:43.582 ********** 2025-06-03 15:51:55.383740 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-03 15:51:55.383746 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.383753 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-03 15:51:55.383759 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.383765 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-03 15:51:55.383771 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.383777 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-03 
15:51:55.383784 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.383790 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-03 15:51:55.383796 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.383803 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-03 15:51:55.383809 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.383815 | orchestrator | 2025-06-03 15:51:55.383821 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-03 15:51:55.383828 | orchestrator | Tuesday 03 June 2025 15:49:10 +0000 (0:00:02.064) 0:01:45.647 ********** 2025-06-03 15:51:55.383837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.383844 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.383854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.383865 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.383871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.383877 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.383902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.383909 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.383916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.383922 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.383931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.383939 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.383945 | orchestrator | 2025-06-03 15:51:55.383951 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-03 15:51:55.383957 | orchestrator | Tuesday 03 June 2025 15:49:12 +0000 (0:00:02.514) 0:01:48.161 ********** 2025-06-03 15:51:55.383968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.383976 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.383981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.383986 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.383992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.383998 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.384014 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384021 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.384033 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.384052 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384058 | orchestrator | 2025-06-03 15:51:55.384065 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-03 15:51:55.384071 | orchestrator | Tuesday 03 June 2025 15:49:15 +0000 (0:00:02.437) 0:01:50.598 ********** 2025-06-03 15:51:55.384078 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.384084 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384090 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384096 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384103 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384109 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384115 | orchestrator | 2025-06-03 15:51:55.384122 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-03 15:51:55.384128 | orchestrator | Tuesday 03 June 2025 15:49:17 +0000 (0:00:02.170) 0:01:52.769 ********** 2025-06-03 15:51:55.384135 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384141 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384147 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.384153 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:51:55.384159 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:51:55.384165 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:51:55.384171 | orchestrator | 2025-06-03 15:51:55.384177 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-03 15:51:55.384184 | orchestrator | Tuesday 03 June 2025 15:49:21 +0000 (0:00:04.320) 0:01:57.090 ********** 2025-06-03 15:51:55.384190 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384196 | orchestrator | skipping: [testbed-node-0] 2025-06-03 
15:51:55.384202 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384208 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384214 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384220 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384227 | orchestrator | 2025-06-03 15:51:55.384234 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-03 15:51:55.384240 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:00:02.187) 0:01:59.278 ********** 2025-06-03 15:51:55.384246 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384253 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.384259 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384265 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384271 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384277 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384284 | orchestrator | 2025-06-03 15:51:55.384294 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-03 15:51:55.384300 | orchestrator | Tuesday 03 June 2025 15:49:27 +0000 (0:00:03.324) 0:02:02.603 ********** 2025-06-03 15:51:55.384306 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.384312 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384319 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384324 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384331 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384337 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384343 | orchestrator | 2025-06-03 15:51:55.384354 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-03 15:51:55.384360 | orchestrator | Tuesday 03 June 2025 15:49:30 +0000 (0:00:03.234) 0:02:05.837 ********** 2025-06-03 15:51:55.384367 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.384373 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384379 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384385 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384391 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384398 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384404 | orchestrator | 2025-06-03 15:51:55.384410 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-03 15:51:55.384416 | orchestrator | Tuesday 03 June 2025 15:49:33 +0000 (0:00:02.697) 0:02:08.535 ********** 2025-06-03 15:51:55.384422 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384428 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.384434 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384440 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384447 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384453 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384459 | orchestrator | 2025-06-03 15:51:55.384466 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-03 15:51:55.384472 | orchestrator | Tuesday 03 June 2025 15:49:35 +0000 (0:00:02.499) 0:02:11.034 ********** 2025-06-03 15:51:55.384478 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384484 | orchestrator | skipping: [testbed-node-0] 
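Every service dict echoed in this play carries a healthcheck block, either healthcheck_curl against the neutron-server API on :9696 or healthcheck_port for the OVN metadata agent and port 6640. The real helpers ship inside the kolla images (healthcheck_port, as far as I know, verifies that the named process holds a connection on the given port); the Python below is only a simplified, self-contained approximation for illustration, not the scripts the containers run.

```python
# Simplified approximations of the health checks referenced in the service
# dicts above; NOT the helper scripts from the kolla images.
import socket
import urllib.request


def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
    """Roughly what the neutron-server check does: the API must answer."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:
        return False


def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Crude stand-in for the metadata-agent check: here just a TCP connect to
    the port (6640, the OVN southbound DB); the real helper inspects the named
    process's connections instead."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example probes mirroring the log entries (internal testbed addresses):
# healthcheck_curl("http://192.168.16.10:9696")
# healthcheck_port("127.0.0.1", 6640)
```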
2025-06-03 15:51:55.384491 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384497 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384504 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384510 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384516 | orchestrator | 2025-06-03 15:51:55.384522 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-03 15:51:55.384529 | orchestrator | Tuesday 03 June 2025 15:49:39 +0000 (0:00:03.736) 0:02:14.771 ********** 2025-06-03 15:51:55.384535 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384545 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.384551 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384557 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384563 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384570 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384576 | orchestrator | 2025-06-03 15:51:55.384582 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-06-03 15:51:55.384588 | orchestrator | Tuesday 03 June 2025 15:49:42 +0000 (0:00:03.315) 0:02:18.086 ********** 2025-06-03 15:51:55.384595 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.384601 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384608 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384614 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384620 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384626 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384632 | orchestrator | 2025-06-03 15:51:55.384638 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-06-03 15:51:55.384645 | orchestrator | Tuesday 03 June 2025 15:49:45 +0000 (0:00:02.907) 0:02:20.994 ********** 2025-06-03 15:51:55.384663 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-03 15:51:55.384669 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.384676 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-03 15:51:55.384682 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384688 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-03 15:51:55.384695 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384702 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-03 15:51:55.384708 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384715 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-03 15:51:55.384722 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384729 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-03 15:51:55.384735 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384741 | orchestrator | 2025-06-03 15:51:55.384747 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-06-03 15:51:55.384753 | orchestrator | Tuesday 03 June 2025 15:49:48 +0000 (0:00:03.130) 0:02:24.124 ********** 
2025-06-03 15:51:55.384760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.384767 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.384777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.384783 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.384825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.384837 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.384844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:51:55.384850 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.384856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.384863 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.384874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:51:55.384895 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.384901 | orchestrator | 2025-06-03 15:51:55.384908 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-03 15:51:55.384914 | orchestrator | Tuesday 03 June 2025 15:49:52 +0000 (0:00:03.947) 0:02:28.072 ********** 2025-06-03 15:51:55.384921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.384935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.384942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:51:55.384949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.384957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.384964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:51:55.384974 | orchestrator | 2025-06-03 15:51:55.384980 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-03 15:51:55.384989 | orchestrator | Tuesday 03 June 2025 15:49:57 +0000 (0:00:04.863) 0:02:32.936 ********** 2025-06-03 15:51:55.384996 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:55.385002 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:55.385009 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:55.385015 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:55.385021 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:55.385027 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:55.385033 | orchestrator | 2025-06-03 15:51:55.385039 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-03 15:51:55.385046 | orchestrator | Tuesday 03 June 2025 15:49:58 +0000 (0:00:00.528) 0:02:33.465 ********** 2025-06-03 15:51:55.385052 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:51:55.385058 | orchestrator | 2025-06-03 15:51:55.385065 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-03 15:51:55.385071 | orchestrator | Tuesday 03 June 2025 15:50:00 +0000 (0:00:02.363) 0:02:35.829 ********** 2025-06-03 15:51:55.385077 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:51:55.385083 | orchestrator | 2025-06-03 15:51:55.385089 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-03 15:51:55.385096 | orchestrator | Tuesday 03 June 2025 15:50:02 +0000 (0:00:02.217) 0:02:38.046 ********** 2025-06-03 15:51:55.385102 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:51:55.385108 | orchestrator | 2025-06-03 15:51:55.385115 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-03 15:51:55.385122 | orchestrator | Tuesday 03 June 2025 15:50:50 +0000 (0:00:47.857) 0:03:25.904 ********** 2025-06-03 15:51:55.385128 | orchestrator | 2025-06-03 15:51:55.385134 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-03 15:51:55.385140 | orchestrator | Tuesday 03 June 2025 15:50:50 +0000 (0:00:00.138) 0:03:26.042 ********** 2025-06-03 15:51:55.385146 | orchestrator | 2025-06-03 15:51:55.385152 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-03 15:51:55.385158 | orchestrator | Tuesday 03 June 2025 15:50:51 +0000 (0:00:00.235) 0:03:26.278 ********** 2025-06-03 15:51:55.385165 | orchestrator | 2025-06-03 15:51:55.385171 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-03 15:51:55.385177 | orchestrator | Tuesday 03 June 2025 15:50:51 +0000 (0:00:00.067) 0:03:26.345 ********** 2025-06-03 15:51:55.385184 | orchestrator | 2025-06-03 15:51:55.385190 | orchestrator | TASK [neutron : Flush Handlers] 
************************************************
2025-06-03 15:51:55.385196 | orchestrator | Tuesday 03 June 2025 15:50:51 +0000 (0:00:00.102) 0:03:26.448 **********
2025-06-03 15:51:55.385203 | orchestrator |
2025-06-03 15:51:55.385209 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-03 15:51:55.385215 | orchestrator | Tuesday 03 June 2025 15:50:51 +0000 (0:00:00.138) 0:03:26.586 **********
2025-06-03 15:51:55.385222 | orchestrator |
2025-06-03 15:51:55.385228 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-06-03 15:51:55.385234 | orchestrator | Tuesday 03 June 2025 15:50:51 +0000 (0:00:00.130) 0:03:26.717 **********
2025-06-03 15:51:55.385241 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:51:55.385247 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:51:55.385253 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:51:55.385259 | orchestrator |
2025-06-03 15:51:55.385265 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-06-03 15:51:55.385271 | orchestrator | Tuesday 03 June 2025 15:51:24 +0000 (0:00:33.444) 0:04:00.162 **********
2025-06-03 15:51:55.385277 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:51:55.385288 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:51:55.385295 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:51:55.385301 | orchestrator |
2025-06-03 15:51:55.385307 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:51:55.385314 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-03 15:51:55.385324 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-03 15:51:55.385330 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-03 15:51:55.385337 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-03 15:51:55.385343 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-03 15:51:55.385349 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-03 15:51:55.385356 | orchestrator |
2025-06-03 15:51:55.385362 | orchestrator |
2025-06-03 15:51:55.385369 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:51:55.385375 | orchestrator | Tuesday 03 June 2025 15:51:53 +0000 (0:00:28.989) 0:04:29.152 **********
2025-06-03 15:51:55.385381 | orchestrator | ===============================================================================
2025-06-03 15:51:55.385388 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 47.86s
2025-06-03 15:51:55.385394 | orchestrator | neutron : Restart neutron-server container ----------------------------- 33.44s
2025-06-03 15:51:55.385400 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 28.99s
2025-06-03 15:51:55.385408 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.34s
2025-06-03 15:51:55.385419 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.55s
2025-06-03 15:51:55.385426 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.09s
2025-06-03 15:51:55.385433 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.86s
2025-06-03 15:51:55.385439 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.82s
2025-06-03 15:51:55.385445 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.44s
2025-06-03 15:51:55.385452 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.32s
2025-06-03 15:51:55.385458 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.26s
2025-06-03 15:51:55.385465 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.99s
2025-06-03 15:51:55.385471 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.97s
2025-06-03 15:51:55.385478 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.95s
2025-06-03 15:51:55.385484 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.74s
2025-06-03 15:51:55.385490 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.73s
2025-06-03 15:51:55.385493 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.70s
2025-06-03 15:51:55.385497 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.63s
2025-06-03 15:51:55.385501 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.58s
2025-06-03 15:51:55.385505 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.56s
2025-06-03 15:51:58.423365 | orchestrator | 2025-06-03 15:51:58 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED
2025-06-03 15:51:58.425486 | orchestrator | 2025-06-03 15:51:58 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED
2025-06-03 15:51:58.426482 | orchestrator | 2025-06-03 15:51:58 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED
2025-06-03 15:51:58.428179 | orchestrator | 2025-06-03 15:51:58 | INFO  | Task 40733156-9f63-49b6-9d20-02942f367f7a is in state STARTED
2025-06-03 15:51:58.428207 | orchestrator | 2025-06-03 15:51:58 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:52:01.476730 | orchestrator | 2025-06-03 15:52:01 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED
2025-06-03 15:52:01.477082 | orchestrator | 2025-06-03 15:52:01 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED
2025-06-03 15:52:01.477897 | orchestrator | 2025-06-03 15:52:01 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state STARTED
2025-06-03 15:52:01.478477 | orchestrator | 2025-06-03 15:52:01 | INFO  | Task 40733156-9f63-49b6-9d20-02942f367f7a is in state STARTED
2025-06-03 15:52:01.478507 | orchestrator | 2025-06-03 15:52:01 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:52:04.526223 | orchestrator | 2025-06-03 15:52:04 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED
2025-06-03 15:52:04.530590 | orchestrator | 2025-06-03 15:52:04 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED
2025-06-03 15:52:04.533051 | orchestrator | 2025-06-03 15:52:04 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03
15:52:04.538658 | orchestrator | 2025-06-03 15:52:04 | INFO  | Task 647234c5-8d9a-4a07-b9c3-4d1b01e6abac is in state SUCCESS 2025-06-03 15:52:04.540830 | orchestrator | 2025-06-03 15:52:04.540915 | orchestrator | 2025-06-03 15:52:04.540925 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:52:04.540934 | orchestrator | 2025-06-03 15:52:04.540938 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:52:04.540943 | orchestrator | Tuesday 03 June 2025 15:48:55 +0000 (0:00:00.303) 0:00:00.303 ********** 2025-06-03 15:52:04.540948 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:52:04.540952 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:52:04.540956 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:52:04.540961 | orchestrator | 2025-06-03 15:52:04.540984 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:52:04.540990 | orchestrator | Tuesday 03 June 2025 15:48:55 +0000 (0:00:00.450) 0:00:00.753 ********** 2025-06-03 15:52:04.540997 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-06-03 15:52:04.541004 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-06-03 15:52:04.541011 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-06-03 15:52:04.541024 | orchestrator | 2025-06-03 15:52:04.541031 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-06-03 15:52:04.541037 | orchestrator | 2025-06-03 15:52:04.541044 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-03 15:52:04.541051 | orchestrator | Tuesday 03 June 2025 15:48:56 +0000 (0:00:00.457) 0:00:01.211 ********** 2025-06-03 15:52:04.541058 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:52:04.541065 | orchestrator | 2025-06-03 15:52:04.541071 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-06-03 15:52:04.541080 | orchestrator | Tuesday 03 June 2025 15:48:57 +0000 (0:00:01.069) 0:00:02.281 ********** 2025-06-03 15:52:04.541084 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-06-03 15:52:04.541088 | orchestrator | 2025-06-03 15:52:04.541093 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-06-03 15:52:04.541097 | orchestrator | Tuesday 03 June 2025 15:49:01 +0000 (0:00:03.853) 0:00:06.135 ********** 2025-06-03 15:52:04.541120 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-06-03 15:52:04.541125 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-06-03 15:52:04.541129 | orchestrator | 2025-06-03 15:52:04.541133 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-06-03 15:52:04.541137 | orchestrator | Tuesday 03 June 2025 15:49:07 +0000 (0:00:06.752) 0:00:12.887 ********** 2025-06-03 15:52:04.541141 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:52:04.541145 | orchestrator | 2025-06-03 15:52:04.541149 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-06-03 15:52:04.541153 | orchestrator | Tuesday 03 June 2025 
15:49:11 +0000 (0:00:03.604) 0:00:16.491 ********** 2025-06-03 15:52:04.541157 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:52:04.541161 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-06-03 15:52:04.541165 | orchestrator | 2025-06-03 15:52:04.541169 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-06-03 15:52:04.541172 | orchestrator | Tuesday 03 June 2025 15:49:15 +0000 (0:00:04.211) 0:00:20.702 ********** 2025-06-03 15:52:04.541176 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:52:04.541180 | orchestrator | 2025-06-03 15:52:04.541184 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-03 15:52:04.541188 | orchestrator | Tuesday 03 June 2025 15:49:19 +0000 (0:00:03.972) 0:00:24.675 ********** 2025-06-03 15:52:04.541191 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-06-03 15:52:04.541195 | orchestrator | 2025-06-03 15:52:04.541199 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-06-03 15:52:04.541202 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:00:04.551) 0:00:29.226 ********** 2025-06-03 15:52:04.541209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.541239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.541245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.541264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541340 | orchestrator | 2025-06-03 15:52:04.541344 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-03 15:52:04.541348 | orchestrator | Tuesday 03 June 2025 15:49:28 +0000 (0:00:04.288) 0:00:33.515 ********** 2025-06-03 15:52:04.541352 
| orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:04.541356 | orchestrator | 2025-06-03 15:52:04.541391 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-03 15:52:04.541395 | orchestrator | Tuesday 03 June 2025 15:49:28 +0000 (0:00:00.303) 0:00:33.819 ********** 2025-06-03 15:52:04.541399 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:04.541403 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:04.541406 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:04.541410 | orchestrator | 2025-06-03 15:52:04.541420 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-03 15:52:04.541424 | orchestrator | Tuesday 03 June 2025 15:49:29 +0000 (0:00:00.744) 0:00:34.564 ********** 2025-06-03 15:52:04.541428 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:52:04.541432 | orchestrator | 2025-06-03 15:52:04.541436 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-03 15:52:04.541440 | orchestrator | Tuesday 03 June 2025 15:49:30 +0000 (0:00:01.100) 0:00:35.665 ********** 2025-06-03 15:52:04.541449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.541456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.541461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.541465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541535 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.541568 | orchestrator | 2025-06-03 15:52:04.541576 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-03 15:52:04.541580 | orchestrator | Tuesday 03 June 2025 15:49:37 +0000 (0:00:06.905) 0:00:42.570 ********** 2025-06-03 15:52:04.541590 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.541597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:52:04.541601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541613 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541620 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:04.541624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.541784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:52:04.541801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2025-06-03 15:52:04.541816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541830 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:04.541834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.541847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:52:04.541851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541889 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:04.541894 | orchestrator | 2025-06-03 15:52:04.541898 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-06-03 15:52:04.541902 | orchestrator | Tuesday 03 June 2025 15:49:39 +0000 (0:00:02.105) 0:00:44.676 ********** 2025-06-03 15:52:04.541906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.541915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:52:04.541919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541938 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:04.541942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.541952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:52:04.541959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.541992 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:04.541999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.542074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:52:04.542085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.542092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.542098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.542111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.542117 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:04.542123 | orchestrator | 2025-06-03 15:52:04.542128 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-06-03 15:52:04.542135 | orchestrator | Tuesday 03 June 2025 15:49:42 +0000 (0:00:02.366) 0:00:47.042 ********** 2025-06-03 15:52:04.542141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.542155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.542162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.542168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542260 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.542988 | orchestrator | 2025-06-03 15:52:04.542995 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-06-03 15:52:04.543002 | orchestrator | Tuesday 03 June 2025 15:49:48 +0000 (0:00:06.258) 0:00:53.300 ********** 2025-06-03 15:52:04.543009 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.543023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.543038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.543049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543149 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543167 | orchestrator | 2025-06-03 15:52:04.543172 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-03 15:52:04.543178 | orchestrator | Tuesday 03 June 2025 15:50:08 +0000 (0:00:20.538) 0:01:13.839 ********** 2025-06-03 15:52:04.543183 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-03 15:52:04.543190 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-03 15:52:04.543195 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-03 15:52:04.543202 | orchestrator | 2025-06-03 15:52:04.543208 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-03 15:52:04.543214 | orchestrator | Tuesday 03 June 2025 15:50:13 +0000 (0:00:04.723) 0:01:18.563 ********** 2025-06-03 15:52:04.543219 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-03 15:52:04.543225 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-03 15:52:04.543230 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-03 15:52:04.543236 | orchestrator | 2025-06-03 15:52:04.543241 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-03 15:52:04.543247 | orchestrator | Tuesday 03 June 2025 15:50:16 +0000 (0:00:03.360) 0:01:21.923 ********** 2025-06-03 15:52:04.543256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.543273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.543280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.543285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543446 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543466 | orchestrator | 2025-06-03 15:52:04.543472 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-03 15:52:04.543478 | orchestrator | Tuesday 03 June 2025 15:50:19 +0000 (0:00:03.069) 0:01:24.993 ********** 2025-06-03 15:52:04.543488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.543504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.543512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.543518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.543893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.543911 | orchestrator | 2025-06-03 15:52:04.543918 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-03 15:52:04.543930 | orchestrator | Tuesday 03 June 2025 15:50:23 +0000 (0:00:03.173) 0:01:28.167 ********** 2025-06-03 15:52:04.543936 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:04.543943 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:04.543948 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:04.543954 | orchestrator | 2025-06-03 15:52:04.543960 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-03 15:52:04.543966 | orchestrator | Tuesday 03 June 2025 15:50:23 +0000 (0:00:00.562) 0:01:28.730 ********** 2025-06-03 15:52:04.543977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.543990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:52:04.543996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 
5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544026 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:04.544033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.544044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:52:04.544092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:52:04.544099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:52:04.544106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:52:04.544175 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:04.544181 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:04.544187 | orchestrator | 2025-06-03 15:52:04.544193 | orchestrator | TASK [designate : Check 
designate containers] ********************************** 2025-06-03 15:52:04.544199 | orchestrator | Tuesday 03 June 2025 15:50:24 +0000 (0:00:00.920) 0:01:29.650 ********** 2025-06-03 15:52:04.544211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.544222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.544229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:52:04.544235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:52:04.544355 | orchestrator | 2025-06-03 15:52:04.544361 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-03 15:52:04.544367 | orchestrator | Tuesday 03 June 2025 15:50:29 +0000 (0:00:04.890) 0:01:34.540 ********** 2025-06-03 15:52:04.544374 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:04.544380 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:04.544386 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:04.544392 | orchestrator | 2025-06-03 15:52:04.544398 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-03 15:52:04.544404 | orchestrator | Tuesday 03 June 2025 15:50:29 +0000 (0:00:00.306) 0:01:34.846 ********** 2025-06-03 15:52:04.544411 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-03 15:52:04.544418 | orchestrator | 2025-06-03 15:52:04.544424 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-03 15:52:04.544430 | orchestrator | Tuesday 03 June 2025 15:50:33 +0000 (0:00:03.193) 0:01:38.040 ********** 2025-06-03 15:52:04.544436 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-03 15:52:04.544442 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-03 15:52:04.544448 | orchestrator | 2025-06-03 15:52:04.544458 | orchestrator | TASK [designate : Running Designate 
bootstrap container] *********************** 2025-06-03 15:52:04.544464 | orchestrator | Tuesday 03 June 2025 15:50:35 +0000 (0:00:02.634) 0:01:40.675 ********** 2025-06-03 15:52:04.544471 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:04.544477 | orchestrator | 2025-06-03 15:52:04.544484 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-03 15:52:04.544490 | orchestrator | Tuesday 03 June 2025 15:50:53 +0000 (0:00:17.479) 0:01:58.155 ********** 2025-06-03 15:52:04.544496 | orchestrator | 2025-06-03 15:52:04.544502 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-03 15:52:04.544508 | orchestrator | Tuesday 03 June 2025 15:50:53 +0000 (0:00:00.157) 0:01:58.312 ********** 2025-06-03 15:52:04.544514 | orchestrator | 2025-06-03 15:52:04.544520 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-03 15:52:04.544526 | orchestrator | Tuesday 03 June 2025 15:50:53 +0000 (0:00:00.151) 0:01:58.464 ********** 2025-06-03 15:52:04.544532 | orchestrator | 2025-06-03 15:52:04.544538 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-06-03 15:52:04.544548 | orchestrator | Tuesday 03 June 2025 15:50:53 +0000 (0:00:00.162) 0:01:58.627 ********** 2025-06-03 15:52:04.544555 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:04.544561 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:52:04.544572 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:52:04.544578 | orchestrator | 2025-06-03 15:52:04.544584 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-06-03 15:52:04.544590 | orchestrator | Tuesday 03 June 2025 15:51:09 +0000 (0:00:15.891) 0:02:14.518 ********** 2025-06-03 15:52:04.544596 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:04.544603 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:52:04.544609 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:52:04.544615 | orchestrator | 2025-06-03 15:52:04.544621 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-06-03 15:52:04.544627 | orchestrator | Tuesday 03 June 2025 15:51:20 +0000 (0:00:11.440) 0:02:25.959 ********** 2025-06-03 15:52:04.544633 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:52:04.544640 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:04.544646 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:52:04.544652 | orchestrator | 2025-06-03 15:52:04.544658 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-06-03 15:52:04.544664 | orchestrator | Tuesday 03 June 2025 15:51:31 +0000 (0:00:10.371) 0:02:36.330 ********** 2025-06-03 15:52:04.544670 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:04.544676 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:52:04.544683 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:52:04.544689 | orchestrator | 2025-06-03 15:52:04.544695 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-06-03 15:52:04.544701 | orchestrator | Tuesday 03 June 2025 15:51:43 +0000 (0:00:12.405) 0:02:48.735 ********** 2025-06-03 15:52:04.544707 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:04.544713 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:52:04.544718 | orchestrator | changed: 
[testbed-node-2] 2025-06-03 15:52:04.544725 | orchestrator | 2025-06-03 15:52:04.544729 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-06-03 15:52:04.544733 | orchestrator | Tuesday 03 June 2025 15:51:48 +0000 (0:00:05.270) 0:02:54.006 ********** 2025-06-03 15:52:04.544736 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:04.544740 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:52:04.544744 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:52:04.544748 | orchestrator | 2025-06-03 15:52:04.544752 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-06-03 15:52:04.544755 | orchestrator | Tuesday 03 June 2025 15:51:54 +0000 (0:00:05.784) 0:02:59.790 ********** 2025-06-03 15:52:04.544759 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:04.544763 | orchestrator | 2025-06-03 15:52:04.544767 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:52:04.544771 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-03 15:52:04.544777 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:52:04.544781 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:52:04.544784 | orchestrator | 2025-06-03 15:52:04.544788 | orchestrator | 2025-06-03 15:52:04.544792 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:52:04.544796 | orchestrator | Tuesday 03 June 2025 15:52:02 +0000 (0:00:07.991) 0:03:07.782 ********** 2025-06-03 15:52:04.544800 | orchestrator | =============================================================================== 2025-06-03 15:52:04.544803 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.54s 2025-06-03 15:52:04.544807 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.48s 2025-06-03 15:52:04.544811 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 15.89s 2025-06-03 15:52:04.544818 | orchestrator | designate : Restart designate-producer container ----------------------- 12.41s 2025-06-03 15:52:04.544822 | orchestrator | designate : Restart designate-api container ---------------------------- 11.44s 2025-06-03 15:52:04.544825 | orchestrator | designate : Restart designate-central container ------------------------ 10.37s 2025-06-03 15:52:04.544829 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.99s 2025-06-03 15:52:04.544833 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.91s 2025-06-03 15:52:04.544837 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.75s 2025-06-03 15:52:04.544840 | orchestrator | designate : Copying over config.json files for services ----------------- 6.26s 2025-06-03 15:52:04.544847 | orchestrator | designate : Restart designate-worker container -------------------------- 5.78s 2025-06-03 15:52:04.544851 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.27s 2025-06-03 15:52:04.544855 | orchestrator | designate : Check designate containers ---------------------------------- 4.89s 2025-06-03 15:52:04.544859 | orchestrator | 
designate : Copying over pools.yaml ------------------------------------- 4.72s 2025-06-03 15:52:04.544862 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.55s 2025-06-03 15:52:04.544866 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.29s 2025-06-03 15:52:04.544925 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.21s 2025-06-03 15:52:04.544930 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.97s 2025-06-03 15:52:04.544934 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.85s 2025-06-03 15:52:04.544938 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.60s 2025-06-03 15:52:04.544944 | orchestrator | 2025-06-03 15:52:04 | INFO  | Task 40733156-9f63-49b6-9d20-02942f367f7a is in state STARTED 2025-06-03 15:52:04.544948 | orchestrator | 2025-06-03 15:52:04 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:07.600052 | orchestrator | 2025-06-03 15:52:07 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:07.601302 | orchestrator | 2025-06-03 15:52:07 | INFO  | Task d5569bb5-c7e8-4d0e-b794-ca112af23416 is in state STARTED 2025-06-03 15:52:07.603777 | orchestrator | 2025-06-03 15:52:07 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:07.605453 | orchestrator | 2025-06-03 15:52:07 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:07.610202 | orchestrator | 2025-06-03 15:52:07 | INFO  | Task 40733156-9f63-49b6-9d20-02942f367f7a is in state SUCCESS 2025-06-03 15:52:07.611472 | orchestrator | 2025-06-03 15:52:07.611501 | orchestrator | 2025-06-03 15:52:07.611506 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:52:07.611512 | orchestrator | 2025-06-03 15:52:07.611518 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:52:07.611524 | orchestrator | Tuesday 03 June 2025 15:50:52 +0000 (0:00:00.800) 0:00:00.800 ********** 2025-06-03 15:52:07.611529 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:52:07.611535 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:52:07.611540 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:52:07.611545 | orchestrator | 2025-06-03 15:52:07.611552 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:52:07.611558 | orchestrator | Tuesday 03 June 2025 15:50:53 +0000 (0:00:00.768) 0:00:01.569 ********** 2025-06-03 15:52:07.611565 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-03 15:52:07.611572 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-03 15:52:07.611578 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-03 15:52:07.611584 | orchestrator | 2025-06-03 15:52:07.611591 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-06-03 15:52:07.611625 | orchestrator | 2025-06-03 15:52:07.611632 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-03 15:52:07.611639 | orchestrator | Tuesday 03 June 2025 15:50:54 +0000 (0:00:01.176) 0:00:02.745 ********** 2025-06-03 15:52:07.611646 | orchestrator | included: 
/ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:52:07.611655 | orchestrator | 2025-06-03 15:52:07.611661 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-06-03 15:52:07.611668 | orchestrator | Tuesday 03 June 2025 15:50:55 +0000 (0:00:01.733) 0:00:04.479 ********** 2025-06-03 15:52:07.611676 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-03 15:52:07.611682 | orchestrator | 2025-06-03 15:52:07.611690 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-03 15:52:07.611697 | orchestrator | Tuesday 03 June 2025 15:50:59 +0000 (0:00:03.976) 0:00:08.456 ********** 2025-06-03 15:52:07.611703 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-03 15:52:07.611710 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-03 15:52:07.611716 | orchestrator | 2025-06-03 15:52:07.611722 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-03 15:52:07.611728 | orchestrator | Tuesday 03 June 2025 15:51:07 +0000 (0:00:07.249) 0:00:15.706 ********** 2025-06-03 15:52:07.611733 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:52:07.611737 | orchestrator | 2025-06-03 15:52:07.611741 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-06-03 15:52:07.611745 | orchestrator | Tuesday 03 June 2025 15:51:10 +0000 (0:00:03.691) 0:00:19.397 ********** 2025-06-03 15:52:07.611748 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:52:07.611752 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-06-03 15:52:07.611756 | orchestrator | 2025-06-03 15:52:07.611760 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-03 15:52:07.611765 | orchestrator | Tuesday 03 June 2025 15:51:14 +0000 (0:00:04.078) 0:00:23.475 ********** 2025-06-03 15:52:07.611771 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:52:07.611777 | orchestrator | 2025-06-03 15:52:07.611783 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-03 15:52:07.611802 | orchestrator | Tuesday 03 June 2025 15:51:18 +0000 (0:00:03.699) 0:00:27.175 ********** 2025-06-03 15:52:07.611809 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-03 15:52:07.611816 | orchestrator | 2025-06-03 15:52:07.611822 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-03 15:52:07.611828 | orchestrator | Tuesday 03 June 2025 15:51:22 +0000 (0:00:04.214) 0:00:31.389 ********** 2025-06-03 15:52:07.611834 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:07.611840 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:07.611843 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:07.611847 | orchestrator | 2025-06-03 15:52:07.611851 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-03 15:52:07.611855 | orchestrator | Tuesday 03 June 2025 15:51:23 +0000 (0:00:00.367) 0:00:31.757 ********** 2025-06-03 15:52:07.611862 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.611968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.611973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.611978 | orchestrator | 2025-06-03 15:52:07.611981 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-03 15:52:07.611985 | orchestrator | Tuesday 03 June 2025 15:51:24 +0000 (0:00:01.023) 0:00:32.780 ********** 2025-06-03 15:52:07.611989 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:07.611993 | orchestrator | 2025-06-03 15:52:07.611997 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-03 15:52:07.612001 | orchestrator | Tuesday 03 June 2025 15:51:24 +0000 (0:00:00.141) 0:00:32.922 ********** 2025-06-03 15:52:07.612004 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:07.612008 | orchestrator | skipping: [testbed-node-1] 
2025-06-03 15:52:07.612012 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:07.612016 | orchestrator | 2025-06-03 15:52:07.612019 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-03 15:52:07.612023 | orchestrator | Tuesday 03 June 2025 15:51:24 +0000 (0:00:00.458) 0:00:33.380 ********** 2025-06-03 15:52:07.612027 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:52:07.612031 | orchestrator | 2025-06-03 15:52:07.612040 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-03 15:52:07.612044 | orchestrator | Tuesday 03 June 2025 15:51:25 +0000 (0:00:01.173) 0:00:34.553 ********** 2025-06-03 15:52:07.612048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612070 | orchestrator | 2025-06-03 15:52:07.612090 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-03 15:52:07.612094 | orchestrator | Tuesday 03 June 2025 15:51:27 +0000 (0:00:01.979) 0:00:36.532 ********** 2025-06-03 15:52:07.612098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:52:07.612102 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:07.612109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:52:07.612117 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:07.612126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:52:07.612130 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:07.612134 | orchestrator | 2025-06-03 15:52:07.612137 | orchestrator | TASK 
[service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-03 15:52:07.612141 | orchestrator | Tuesday 03 June 2025 15:51:29 +0000 (0:00:01.100) 0:00:37.633 ********** 2025-06-03 15:52:07.612145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:52:07.612149 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:07.612153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:52:07.612157 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:07.612164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:52:07.612171 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:07.612175 | orchestrator | 2025-06-03 15:52:07.612179 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-03 15:52:07.612183 | orchestrator | Tuesday 03 June 2025 15:51:30 +0000 (0:00:00.935) 0:00:38.568 ********** 
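Each placement-api item above carries a healthcheck block: kolla's healthcheck_curl helper is run inside the container against the node's internal API address on port 8780 every 30 seconds, with a 30-second timeout and 3 retries before the container would be flagged unhealthy. A minimal Python sketch of an equivalent probe; the URL and timing values are taken from the dicts above, and treating any HTTP answer as healthy is an assumption rather than the helper's exact behaviour:

import time
import urllib.error
import urllib.request


def probe(url, timeout=30.0, retries=3, interval=30.0):
    """Return True if the endpoint answers an HTTP request within the retry budget."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                # Treat any 2xx/3xx answer as "the WSGI app behind the port is alive".
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            if attempt < retries:
                time.sleep(interval)
    return False


if __name__ == "__main__":
    # Internal placement-api address of testbed-node-0, as printed in the items above.
    print(probe("http://192.168.16.10:8780"))
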
2025-06-03 15:52:07.612192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612205 | orchestrator | 2025-06-03 15:52:07.612209 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-03 15:52:07.612212 | orchestrator | Tuesday 03 June 2025 15:51:31 +0000 (0:00:01.599) 0:00:40.167 ********** 2025-06-03 15:52:07.612222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612239 | orchestrator | 2025-06-03 15:52:07.612243 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-03 15:52:07.612247 | orchestrator | Tuesday 03 June 2025 15:51:35 +0000 (0:00:03.844) 0:00:44.012 ********** 2025-06-03 15:52:07.612253 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-03 15:52:07.612260 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-03 15:52:07.612266 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-03 15:52:07.612272 | orchestrator | 2025-06-03 15:52:07.612279 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-03 15:52:07.612285 | orchestrator | Tuesday 03 June 2025 15:51:36 +0000 (0:00:01.438) 0:00:45.450 ********** 2025-06-03 15:52:07.612291 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:07.612296 | orchestrator | 
changed: [testbed-node-1] 2025-06-03 15:52:07.612303 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:52:07.612308 | orchestrator | 2025-06-03 15:52:07.612314 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-03 15:52:07.612325 | orchestrator | Tuesday 03 June 2025 15:51:38 +0000 (0:00:01.458) 0:00:46.909 ********** 2025-06-03 15:52:07.612334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:52:07.612342 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:07.612347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:52:07.612354 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:07.612365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:52:07.612372 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:07.612379 | orchestrator | 2025-06-03 15:52:07.612386 | orchestrator | TASK [placement : 
Check placement containers] ********************************** 2025-06-03 15:52:07.612393 | orchestrator | Tuesday 03 June 2025 15:51:38 +0000 (0:00:00.639) 0:00:47.549 ********** 2025-06-03 15:52:07.612399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:52:07.612423 | orchestrator | 2025-06-03 15:52:07.612427 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-06-03 15:52:07.612431 | orchestrator | Tuesday 03 June 2025 15:51:40 +0000 (0:00:01.822) 0:00:49.372 ********** 2025-06-03 15:52:07.612434 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:07.612438 | orchestrator | 2025-06-03 15:52:07.612442 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 
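The two database tasks here run against the MariaDB cluster deployed earlier: one creates the placement schema, the other creates the placement database user and grants it privileges. A minimal PyMySQL sketch of the same idea; host, user names and passwords are placeholders, since kolla-ansible takes the real values from its inventory and passwords.yml:

import pymysql

# Host and credentials are placeholders; kolla-ansible reads the real values from
# its inventory and passwords.yml and talks to the internal MariaDB endpoint.
conn = pymysql.connect(host="MARIADB_HOST", user="root", password="ROOT_DB_PASSWORD")
try:
    with conn.cursor() as cur:
        # "Creating placement databases"
        cur.execute("CREATE DATABASE IF NOT EXISTS placement")
        # "Creating placement databases user and setting permissions"
        cur.execute("CREATE USER IF NOT EXISTS 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DB_PASSWORD'")
        cur.execute("GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%'")
    conn.commit()
finally:
    conn.close()
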
2025-06-03 15:52:07.612446 | orchestrator | Tuesday 03 June 2025 15:51:43 +0000 (0:00:02.416) 0:00:51.788 ********** 2025-06-03 15:52:07.612449 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:07.612453 | orchestrator | 2025-06-03 15:52:07.612457 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-06-03 15:52:07.612461 | orchestrator | Tuesday 03 June 2025 15:51:45 +0000 (0:00:02.183) 0:00:53.972 ********** 2025-06-03 15:52:07.612467 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:07.612471 | orchestrator | 2025-06-03 15:52:07.612475 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-03 15:52:07.612479 | orchestrator | Tuesday 03 June 2025 15:51:57 +0000 (0:00:12.402) 0:01:06.374 ********** 2025-06-03 15:52:07.612483 | orchestrator | 2025-06-03 15:52:07.612488 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-03 15:52:07.612494 | orchestrator | Tuesday 03 June 2025 15:51:57 +0000 (0:00:00.153) 0:01:06.528 ********** 2025-06-03 15:52:07.612499 | orchestrator | 2025-06-03 15:52:07.612503 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-03 15:52:07.612507 | orchestrator | Tuesday 03 June 2025 15:51:58 +0000 (0:00:00.134) 0:01:06.662 ********** 2025-06-03 15:52:07.612510 | orchestrator | 2025-06-03 15:52:07.612514 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-06-03 15:52:07.612518 | orchestrator | Tuesday 03 June 2025 15:51:58 +0000 (0:00:00.085) 0:01:06.748 ********** 2025-06-03 15:52:07.612525 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:07.612529 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:52:07.612533 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:52:07.612537 | orchestrator | 2025-06-03 15:52:07.612541 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:52:07.612546 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:52:07.612552 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:52:07.612556 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:52:07.612559 | orchestrator | 2025-06-03 15:52:07.612563 | orchestrator | 2025-06-03 15:52:07.612567 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:52:07.612571 | orchestrator | Tuesday 03 June 2025 15:52:05 +0000 (0:00:07.292) 0:01:14.041 ********** 2025-06-03 15:52:07.612575 | orchestrator | =============================================================================== 2025-06-03 15:52:07.612578 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.40s 2025-06-03 15:52:07.612582 | orchestrator | placement : Restart placement-api container ----------------------------- 7.29s 2025-06-03 15:52:07.612586 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.25s 2025-06-03 15:52:07.612589 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.21s 2025-06-03 15:52:07.612593 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.08s 2025-06-03 
15:52:07.612597 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.98s 2025-06-03 15:52:07.612601 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.84s 2025-06-03 15:52:07.612605 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.70s 2025-06-03 15:52:07.612608 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.69s 2025-06-03 15:52:07.612612 | orchestrator | placement : Creating placement databases -------------------------------- 2.42s 2025-06-03 15:52:07.612616 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.18s 2025-06-03 15:52:07.612622 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.98s 2025-06-03 15:52:07.612626 | orchestrator | placement : Check placement containers ---------------------------------- 1.82s 2025-06-03 15:52:07.612630 | orchestrator | placement : include_tasks ----------------------------------------------- 1.73s 2025-06-03 15:52:07.612634 | orchestrator | placement : Copying over config.json files for services ----------------- 1.60s 2025-06-03 15:52:07.612638 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.46s 2025-06-03 15:52:07.612641 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.44s 2025-06-03 15:52:07.612645 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.18s 2025-06-03 15:52:07.612649 | orchestrator | placement : include_tasks ----------------------------------------------- 1.17s 2025-06-03 15:52:07.612653 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.10s 2025-06-03 15:52:07.612656 | orchestrator | 2025-06-03 15:52:07 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:10.660647 | orchestrator | 2025-06-03 15:52:10 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:10.666313 | orchestrator | 2025-06-03 15:52:10 | INFO  | Task d5569bb5-c7e8-4d0e-b794-ca112af23416 is in state STARTED 2025-06-03 15:52:10.670228 | orchestrator | 2025-06-03 15:52:10 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:10.676407 | orchestrator | 2025-06-03 15:52:10 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:10.676502 | orchestrator | 2025-06-03 15:52:10 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:13.718722 | orchestrator | 2025-06-03 15:52:13 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:13.720734 | orchestrator | 2025-06-03 15:52:13 | INFO  | Task d5569bb5-c7e8-4d0e-b794-ca112af23416 is in state SUCCESS 2025-06-03 15:52:13.723774 | orchestrator | 2025-06-03 15:52:13 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:13.727719 | orchestrator | 2025-06-03 15:52:13 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:13.731606 | orchestrator | 2025-06-03 15:52:13 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:13.731686 | orchestrator | 2025-06-03 15:52:13 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:16.774523 | orchestrator | 2025-06-03 15:52:16 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 
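From this point the deploy wrapper is simply polling the OSISM task queue: each Kolla action was submitted as an asynchronous task, and the job re-reads the task states every few seconds until they reach SUCCESS (or FAILURE). A minimal sketch of such a loop; get_task_state is a hypothetical accessor standing in for whatever call the real client makes, not the actual osism API:

import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll until every task has left the STARTED/PENDING states.

    get_task_state is a stand-in for the real task-state lookup; it must
    return a state string such as 'STARTED', 'SUCCESS' or 'FAILURE'.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
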
2025-06-03 15:52:16.775602 | orchestrator | 2025-06-03 15:52:16 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:16.777696 | orchestrator | 2025-06-03 15:52:16 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:16.779445 | orchestrator | 2025-06-03 15:52:16 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:16.779607 | orchestrator | 2025-06-03 15:52:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:19.810990 | orchestrator | 2025-06-03 15:52:19 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:19.812615 | orchestrator | 2025-06-03 15:52:19 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:19.814063 | orchestrator | 2025-06-03 15:52:19 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:19.814969 | orchestrator | 2025-06-03 15:52:19 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:19.815013 | orchestrator | 2025-06-03 15:52:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:22.853365 | orchestrator | 2025-06-03 15:52:22 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:22.856427 | orchestrator | 2025-06-03 15:52:22 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:22.857580 | orchestrator | 2025-06-03 15:52:22 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:22.859216 | orchestrator | 2025-06-03 15:52:22 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:22.859277 | orchestrator | 2025-06-03 15:52:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:25.892723 | orchestrator | 2025-06-03 15:52:25 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:25.893209 | orchestrator | 2025-06-03 15:52:25 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:25.894180 | orchestrator | 2025-06-03 15:52:25 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:25.894676 | orchestrator | 2025-06-03 15:52:25 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:25.894712 | orchestrator | 2025-06-03 15:52:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:28.928260 | orchestrator | 2025-06-03 15:52:28 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:28.929114 | orchestrator | 2025-06-03 15:52:28 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:28.930496 | orchestrator | 2025-06-03 15:52:28 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:28.932114 | orchestrator | 2025-06-03 15:52:28 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:28.932140 | orchestrator | 2025-06-03 15:52:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:31.971069 | orchestrator | 2025-06-03 15:52:31 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:31.972041 | orchestrator | 2025-06-03 15:52:31 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:31.973191 | orchestrator | 2025-06-03 15:52:31 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 
2025-06-03 15:52:31.974562 | orchestrator | 2025-06-03 15:52:31 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:31.974615 | orchestrator | 2025-06-03 15:52:31 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:35.009867 | orchestrator | 2025-06-03 15:52:35 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:35.010893 | orchestrator | 2025-06-03 15:52:35 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:35.011131 | orchestrator | 2025-06-03 15:52:35 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:35.012342 | orchestrator | 2025-06-03 15:52:35 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:35.012385 | orchestrator | 2025-06-03 15:52:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:38.047285 | orchestrator | 2025-06-03 15:52:38 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:38.047994 | orchestrator | 2025-06-03 15:52:38 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:38.049173 | orchestrator | 2025-06-03 15:52:38 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:38.050341 | orchestrator | 2025-06-03 15:52:38 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:38.050568 | orchestrator | 2025-06-03 15:52:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:41.082950 | orchestrator | 2025-06-03 15:52:41 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:41.084129 | orchestrator | 2025-06-03 15:52:41 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:41.086578 | orchestrator | 2025-06-03 15:52:41 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:41.088400 | orchestrator | 2025-06-03 15:52:41 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:41.088439 | orchestrator | 2025-06-03 15:52:41 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:44.127342 | orchestrator | 2025-06-03 15:52:44 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:44.128276 | orchestrator | 2025-06-03 15:52:44 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:44.129676 | orchestrator | 2025-06-03 15:52:44 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:44.130808 | orchestrator | 2025-06-03 15:52:44 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:44.130963 | orchestrator | 2025-06-03 15:52:44 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:47.176589 | orchestrator | 2025-06-03 15:52:47 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:47.177996 | orchestrator | 2025-06-03 15:52:47 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:47.178157 | orchestrator | 2025-06-03 15:52:47 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:47.179033 | orchestrator | 2025-06-03 15:52:47 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:47.179084 | orchestrator | 2025-06-03 15:52:47 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:50.207656 
| orchestrator | 2025-06-03 15:52:50 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:50.208240 | orchestrator | 2025-06-03 15:52:50 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:50.210212 | orchestrator | 2025-06-03 15:52:50 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:50.211062 | orchestrator | 2025-06-03 15:52:50 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:50.211118 | orchestrator | 2025-06-03 15:52:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:53.242149 | orchestrator | 2025-06-03 15:52:53 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:53.242589 | orchestrator | 2025-06-03 15:52:53 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:53.243383 | orchestrator | 2025-06-03 15:52:53 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:53.245328 | orchestrator | 2025-06-03 15:52:53 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:53.245362 | orchestrator | 2025-06-03 15:52:53 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:56.290873 | orchestrator | 2025-06-03 15:52:56 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:56.292497 | orchestrator | 2025-06-03 15:52:56 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:56.294675 | orchestrator | 2025-06-03 15:52:56 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:56.295409 | orchestrator | 2025-06-03 15:52:56 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:56.295434 | orchestrator | 2025-06-03 15:52:56 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:59.328538 | orchestrator | 2025-06-03 15:52:59 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:52:59.329228 | orchestrator | 2025-06-03 15:52:59 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:52:59.329534 | orchestrator | 2025-06-03 15:52:59 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:52:59.330314 | orchestrator | 2025-06-03 15:52:59 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:52:59.330350 | orchestrator | 2025-06-03 15:52:59 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:02.371692 | orchestrator | 2025-06-03 15:53:02 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:02.372092 | orchestrator | 2025-06-03 15:53:02 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:02.373019 | orchestrator | 2025-06-03 15:53:02 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:02.373693 | orchestrator | 2025-06-03 15:53:02 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:02.373713 | orchestrator | 2025-06-03 15:53:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:05.405239 | orchestrator | 2025-06-03 15:53:05 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:05.406371 | orchestrator | 2025-06-03 15:53:05 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:05.407938 | 
orchestrator | 2025-06-03 15:53:05 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:05.409244 | orchestrator | 2025-06-03 15:53:05 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:05.409278 | orchestrator | 2025-06-03 15:53:05 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:08.455764 | orchestrator | 2025-06-03 15:53:08 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:08.456300 | orchestrator | 2025-06-03 15:53:08 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:08.457688 | orchestrator | 2025-06-03 15:53:08 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:08.459337 | orchestrator | 2025-06-03 15:53:08 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:08.459405 | orchestrator | 2025-06-03 15:53:08 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:11.510974 | orchestrator | 2025-06-03 15:53:11 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:11.513087 | orchestrator | 2025-06-03 15:53:11 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:11.514926 | orchestrator | 2025-06-03 15:53:11 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:11.517528 | orchestrator | 2025-06-03 15:53:11 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:11.517600 | orchestrator | 2025-06-03 15:53:11 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:14.554900 | orchestrator | 2025-06-03 15:53:14 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:14.554974 | orchestrator | 2025-06-03 15:53:14 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:14.555715 | orchestrator | 2025-06-03 15:53:14 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:14.556977 | orchestrator | 2025-06-03 15:53:14 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:14.557349 | orchestrator | 2025-06-03 15:53:14 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:17.614754 | orchestrator | 2025-06-03 15:53:17 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:17.616672 | orchestrator | 2025-06-03 15:53:17 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:17.619771 | orchestrator | 2025-06-03 15:53:17 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:17.622890 | orchestrator | 2025-06-03 15:53:17 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:17.622953 | orchestrator | 2025-06-03 15:53:17 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:20.667104 | orchestrator | 2025-06-03 15:53:20 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:20.672522 | orchestrator | 2025-06-03 15:53:20 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:20.674501 | orchestrator | 2025-06-03 15:53:20 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:20.677818 | orchestrator | 2025-06-03 15:53:20 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:20.677875 | 
orchestrator | 2025-06-03 15:53:20 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:23.716519 | orchestrator | 2025-06-03 15:53:23 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:23.718196 | orchestrator | 2025-06-03 15:53:23 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:23.720018 | orchestrator | 2025-06-03 15:53:23 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:23.721754 | orchestrator | 2025-06-03 15:53:23 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:23.721835 | orchestrator | 2025-06-03 15:53:23 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:26.767295 | orchestrator | 2025-06-03 15:53:26 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:26.771619 | orchestrator | 2025-06-03 15:53:26 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:26.773346 | orchestrator | 2025-06-03 15:53:26 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:26.775434 | orchestrator | 2025-06-03 15:53:26 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:26.775501 | orchestrator | 2025-06-03 15:53:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:29.818902 | orchestrator | 2025-06-03 15:53:29 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:29.821391 | orchestrator | 2025-06-03 15:53:29 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:29.823305 | orchestrator | 2025-06-03 15:53:29 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:29.824957 | orchestrator | 2025-06-03 15:53:29 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:29.825003 | orchestrator | 2025-06-03 15:53:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:32.881467 | orchestrator | 2025-06-03 15:53:32 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:32.883021 | orchestrator | 2025-06-03 15:53:32 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:32.887057 | orchestrator | 2025-06-03 15:53:32 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:32.889193 | orchestrator | 2025-06-03 15:53:32 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:32.889240 | orchestrator | 2025-06-03 15:53:32 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:35.937049 | orchestrator | 2025-06-03 15:53:35 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:35.938005 | orchestrator | 2025-06-03 15:53:35 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:35.938088 | orchestrator | 2025-06-03 15:53:35 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:35.938634 | orchestrator | 2025-06-03 15:53:35 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:35.938815 | orchestrator | 2025-06-03 15:53:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:38.973620 | orchestrator | 2025-06-03 15:53:38 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:38.973936 | orchestrator | 2025-06-03 
15:53:38 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:38.975114 | orchestrator | 2025-06-03 15:53:38 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:38.975812 | orchestrator | 2025-06-03 15:53:38 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:38.975852 | orchestrator | 2025-06-03 15:53:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:42.007838 | orchestrator | 2025-06-03 15:53:42 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:42.008181 | orchestrator | 2025-06-03 15:53:42 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:42.009127 | orchestrator | 2025-06-03 15:53:42 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:42.009687 | orchestrator | 2025-06-03 15:53:42 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:42.011092 | orchestrator | 2025-06-03 15:53:42 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:45.047887 | orchestrator | 2025-06-03 15:53:45 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:45.048510 | orchestrator | 2025-06-03 15:53:45 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:45.049097 | orchestrator | 2025-06-03 15:53:45 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:45.049956 | orchestrator | 2025-06-03 15:53:45 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:45.049985 | orchestrator | 2025-06-03 15:53:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:48.090739 | orchestrator | 2025-06-03 15:53:48 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:48.094565 | orchestrator | 2025-06-03 15:53:48 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:48.096673 | orchestrator | 2025-06-03 15:53:48 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:48.098213 | orchestrator | 2025-06-03 15:53:48 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:48.098262 | orchestrator | 2025-06-03 15:53:48 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:51.140588 | orchestrator | 2025-06-03 15:53:51 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:51.140683 | orchestrator | 2025-06-03 15:53:51 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:51.141849 | orchestrator | 2025-06-03 15:53:51 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:51.143644 | orchestrator | 2025-06-03 15:53:51 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:51.143711 | orchestrator | 2025-06-03 15:53:51 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:54.185262 | orchestrator | 2025-06-03 15:53:54 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:54.185817 | orchestrator | 2025-06-03 15:53:54 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:54.188724 | orchestrator | 2025-06-03 15:53:54 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:54.190549 | orchestrator | 2025-06-03 
15:53:54 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:54.190592 | orchestrator | 2025-06-03 15:53:54 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:57.226637 | orchestrator | 2025-06-03 15:53:57 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state STARTED 2025-06-03 15:53:57.226713 | orchestrator | 2025-06-03 15:53:57 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:53:57.229991 | orchestrator | 2025-06-03 15:53:57 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:53:57.231415 | orchestrator | 2025-06-03 15:53:57 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:53:57.231471 | orchestrator | 2025-06-03 15:53:57 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:00.276010 | orchestrator | 2025-06-03 15:54:00.276098 | orchestrator | 2025-06-03 15:54:00.276107 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:54:00.276115 | orchestrator | 2025-06-03 15:54:00.276122 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:54:00.276129 | orchestrator | Tuesday 03 June 2025 15:52:10 +0000 (0:00:00.211) 0:00:00.211 ********** 2025-06-03 15:54:00.276135 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:00.276143 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:54:00.276149 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:54:00.276155 | orchestrator | 2025-06-03 15:54:00.276161 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:54:00.276167 | orchestrator | Tuesday 03 June 2025 15:52:10 +0000 (0:00:00.326) 0:00:00.538 ********** 2025-06-03 15:54:00.276173 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-03 15:54:00.276180 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-03 15:54:00.276186 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-03 15:54:00.276192 | orchestrator | 2025-06-03 15:54:00.276199 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-06-03 15:54:00.276205 | orchestrator | 2025-06-03 15:54:00.276212 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-03 15:54:00.276218 | orchestrator | Tuesday 03 June 2025 15:52:11 +0000 (0:00:00.824) 0:00:01.362 ********** 2025-06-03 15:54:00.276224 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:00.276230 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:54:00.276236 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:54:00.276243 | orchestrator | 2025-06-03 15:54:00.276249 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:54:00.276256 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:54:00.276265 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:54:00.276272 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:54:00.276278 | orchestrator | 2025-06-03 15:54:00.276284 | orchestrator | 2025-06-03 15:54:00.276313 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:54:00.276320 
| orchestrator | Tuesday 03 June 2025 15:52:12 +0000 (0:00:00.803) 0:00:02.166 ********** 2025-06-03 15:54:00.276326 | orchestrator | =============================================================================== 2025-06-03 15:54:00.276333 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2025-06-03 15:54:00.276340 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.80s 2025-06-03 15:54:00.276347 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-06-03 15:54:00.276378 | orchestrator | 2025-06-03 15:54:00.276385 | orchestrator | 2025-06-03 15:54:00.276414 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:54:00.276421 | orchestrator | 2025-06-03 15:54:00.276428 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:54:00.276435 | orchestrator | Tuesday 03 June 2025 15:52:00 +0000 (0:00:00.506) 0:00:00.506 ********** 2025-06-03 15:54:00.276441 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:00.276447 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:54:00.276462 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:54:00.276478 | orchestrator | 2025-06-03 15:54:00.276484 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:54:00.276491 | orchestrator | Tuesday 03 June 2025 15:52:01 +0000 (0:00:00.336) 0:00:00.843 ********** 2025-06-03 15:54:00.276496 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-03 15:54:00.276503 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-03 15:54:00.276509 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-03 15:54:00.276515 | orchestrator | 2025-06-03 15:54:00.276522 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-06-03 15:54:00.276528 | orchestrator | 2025-06-03 15:54:00.276535 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-03 15:54:00.276554 | orchestrator | Tuesday 03 June 2025 15:52:01 +0000 (0:00:00.400) 0:00:01.244 ********** 2025-06-03 15:54:00.276562 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:54:00.276568 | orchestrator | 2025-06-03 15:54:00.276576 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-06-03 15:54:00.276582 | orchestrator | Tuesday 03 June 2025 15:52:02 +0000 (0:00:00.629) 0:00:01.874 ********** 2025-06-03 15:54:00.276590 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-06-03 15:54:00.276597 | orchestrator | 2025-06-03 15:54:00.276604 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-06-03 15:54:00.276609 | orchestrator | Tuesday 03 June 2025 15:52:06 +0000 (0:00:04.202) 0:00:06.076 ********** 2025-06-03 15:54:00.276613 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-03 15:54:00.276618 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-03 15:54:00.276623 | orchestrator | 2025-06-03 15:54:00.276627 | orchestrator | TASK [service-ks-register : magnum | Creating projects] 
************************ 2025-06-03 15:54:00.276632 | orchestrator | Tuesday 03 June 2025 15:52:14 +0000 (0:00:07.656) 0:00:13.732 ********** 2025-06-03 15:54:00.276636 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:54:00.276641 | orchestrator | 2025-06-03 15:54:00.276646 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-06-03 15:54:00.276651 | orchestrator | Tuesday 03 June 2025 15:52:17 +0000 (0:00:03.691) 0:00:17.424 ********** 2025-06-03 15:54:00.276666 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:54:00.276672 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-03 15:54:00.276676 | orchestrator | 2025-06-03 15:54:00.276680 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-06-03 15:54:00.276685 | orchestrator | Tuesday 03 June 2025 15:52:22 +0000 (0:00:04.517) 0:00:21.942 ********** 2025-06-03 15:54:00.276689 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:54:00.276694 | orchestrator | 2025-06-03 15:54:00.276698 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-03 15:54:00.276702 | orchestrator | Tuesday 03 June 2025 15:52:26 +0000 (0:00:04.036) 0:00:25.978 ********** 2025-06-03 15:54:00.276707 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-03 15:54:00.276711 | orchestrator | 2025-06-03 15:54:00.276715 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-03 15:54:00.276726 | orchestrator | Tuesday 03 June 2025 15:52:30 +0000 (0:00:04.398) 0:00:30.377 ********** 2025-06-03 15:54:00.276731 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:00.276735 | orchestrator | 2025-06-03 15:54:00.276755 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-06-03 15:54:00.276760 | orchestrator | Tuesday 03 June 2025 15:52:34 +0000 (0:00:03.542) 0:00:33.919 ********** 2025-06-03 15:54:00.276764 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:00.276768 | orchestrator | 2025-06-03 15:54:00.276773 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-06-03 15:54:00.276777 | orchestrator | Tuesday 03 June 2025 15:52:38 +0000 (0:00:04.266) 0:00:38.186 ********** 2025-06-03 15:54:00.276782 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:00.276786 | orchestrator | 2025-06-03 15:54:00.276791 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-03 15:54:00.276795 | orchestrator | Tuesday 03 June 2025 15:52:42 +0000 (0:00:04.030) 0:00:42.216 ********** 2025-06-03 15:54:00.276803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.276814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.276820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.276828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.276837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.276842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.276846 | orchestrator | 2025-06-03 15:54:00.276849 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-03 15:54:00.276853 | orchestrator | Tuesday 03 June 2025 15:52:43 +0000 (0:00:01.413) 0:00:43.629 ********** 2025-06-03 15:54:00.276857 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:00.276861 | orchestrator | 2025-06-03 15:54:00.276865 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-03 15:54:00.276868 | orchestrator | Tuesday 03 June 2025 15:52:44 +0000 (0:00:00.134) 0:00:43.764 ********** 2025-06-03 15:54:00.276872 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:00.276876 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:00.276880 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:00.276884 | orchestrator | 2025-06-03 15:54:00.276887 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-03 15:54:00.276891 | orchestrator | Tuesday 03 June 2025 15:52:44 +0000 (0:00:00.617) 0:00:44.381 ********** 2025-06-03 15:54:00.276895 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:54:00.276899 | orchestrator | 2025-06-03 15:54:00.276905 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-03 15:54:00.276909 | orchestrator | Tuesday 03 June 2025 15:52:45 +0000 (0:00:00.998) 0:00:45.380 ********** 2025-06-03 15:54:00.276913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.276923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.276928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.276932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.276938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.276943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.276950 | orchestrator | 2025-06-03 15:54:00.276954 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-03 15:54:00.276960 | orchestrator | Tuesday 03 June 2025 15:52:48 +0000 (0:00:03.130) 0:00:48.510 ********** 2025-06-03 15:54:00.276964 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:00.276968 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:54:00.276972 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:54:00.276976 | orchestrator | 2025-06-03 15:54:00.276980 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-03 15:54:00.276983 | orchestrator | Tuesday 03 June 2025 15:52:49 +0000 (0:00:00.432) 0:00:48.942 ********** 2025-06-03 15:54:00.276988 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:54:00.276992 | orchestrator | 2025-06-03 15:54:00.276995 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-03 15:54:00.276999 | orchestrator | Tuesday 03 June 2025 15:52:50 +0000 (0:00:00.810) 0:00:49.753 ********** 2025-06-03 15:54:00.277003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277039 | orchestrator | 2025-06-03 15:54:00.277043 | orchestrator | TASK [service-cert-copy : magnum | Copying 
over backend internal TLS certificate] *** 2025-06-03 15:54:00.277047 | orchestrator | Tuesday 03 June 2025 15:52:52 +0000 (0:00:02.924) 0:00:52.678 ********** 2025-06-03 15:54:00.277052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:54:00.277058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:00.277065 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:00.277072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:54:00.277076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:00.277080 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:00.277084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:54:00.277088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:00.277100 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:00.277104 | orchestrator | 2025-06-03 15:54:00.277107 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-03 15:54:00.277111 | orchestrator | Tuesday 03 June 2025 15:52:53 +0000 (0:00:00.758) 0:00:53.437 ********** 2025-06-03 15:54:00.277118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:54:00.277127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:00.277131 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:00.277135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:54:00.277139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:00.277143 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:00.277149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:54:00.277156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:00.277160 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:00.277164 | orchestrator | 2025-06-03 15:54:00.277168 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-03 15:54:00.277172 | orchestrator | Tuesday 03 June 2025 15:52:55 +0000 (0:00:01.405) 0:00:54.842 ********** 2025-06-03 15:54:00.277180 | orchestrator | 2025-06-03 15:54:00 | INFO  | Task f8b06b65-de80-4273-9952-d7119afa9973 is in state SUCCESS 2025-06-03 15:54:00.277327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277369 | orchestrator | 2025-06-03 15:54:00.277373 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-03 15:54:00.277377 | orchestrator | Tuesday 03 June 2025 15:52:57 +0000 (0:00:02.564) 0:00:57.406 ********** 2025-06-03 15:54:00.277382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277435 | orchestrator | 2025-06-03 15:54:00.277441 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-03 15:54:00.277447 | orchestrator | Tuesday 03 June 2025 15:53:02 +0000 (0:00:04.658) 0:01:02.064 ********** 2025-06-03 15:54:00.277456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:54:00.277462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:00.277468 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:00.277479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:54:00.277485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:00.277495 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:00.277501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:54:00.277510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:00.277517 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:00.277524 | orchestrator | 2025-06-03 15:54:00.277528 | orchestrator | TASK [magnum : Check magnum containers] 
**************************************** 2025-06-03 15:54:00.277532 | orchestrator | Tuesday 03 June 2025 15:53:03 +0000 (0:00:00.789) 0:01:02.854 ********** 2025-06-03 15:54:00.277538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:54:00.277555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:00.277572 | orchestrator | 2025-06-03 15:54:00.277576 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-03 15:54:00.277580 | orchestrator | Tuesday 03 June 2025 15:53:05 +0000 (0:00:02.035) 0:01:04.890 ********** 2025-06-03 15:54:00.277584 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:00.277587 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:00.277591 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:00.277595 | orchestrator | 2025-06-03 15:54:00.277599 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-03 15:54:00.277602 | orchestrator | Tuesday 03 June 2025 15:53:05 +0000 (0:00:00.301) 0:01:05.192 ********** 2025-06-03 15:54:00.277606 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:00.277610 | orchestrator | 2025-06-03 15:54:00.277614 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-03 15:54:00.277621 | orchestrator | Tuesday 03 June 2025 15:53:07 +0000 (0:00:02.386) 0:01:07.578 ********** 2025-06-03 15:54:00.277624 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:00.277628 | orchestrator | 2025-06-03 15:54:00.277632 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-03 15:54:00.277636 | orchestrator | Tuesday 03 June 2025 15:53:10 +0000 (0:00:02.400) 0:01:09.978 ********** 2025-06-03 15:54:00.277639 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:00.277643 | orchestrator | 2025-06-03 15:54:00.277647 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-03 
15:54:00.277651 | orchestrator | Tuesday 03 June 2025 15:53:30 +0000 (0:00:20.490) 0:01:30.469 ********** 2025-06-03 15:54:00.277654 | orchestrator | 2025-06-03 15:54:00.277658 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-03 15:54:00.277662 | orchestrator | Tuesday 03 June 2025 15:53:30 +0000 (0:00:00.100) 0:01:30.569 ********** 2025-06-03 15:54:00.277666 | orchestrator | 2025-06-03 15:54:00.277670 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-03 15:54:00.277673 | orchestrator | Tuesday 03 June 2025 15:53:30 +0000 (0:00:00.065) 0:01:30.635 ********** 2025-06-03 15:54:00.277677 | orchestrator | 2025-06-03 15:54:00.277681 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-03 15:54:00.277685 | orchestrator | Tuesday 03 June 2025 15:53:30 +0000 (0:00:00.065) 0:01:30.700 ********** 2025-06-03 15:54:00.277688 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:00.277692 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:54:00.277696 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:54:00.277699 | orchestrator | 2025-06-03 15:54:00.277703 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-03 15:54:00.277707 | orchestrator | Tuesday 03 June 2025 15:53:49 +0000 (0:00:18.336) 0:01:49.037 ********** 2025-06-03 15:54:00.277711 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:00.277715 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:54:00.277718 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:54:00.277722 | orchestrator | 2025-06-03 15:54:00.277726 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:54:00.277730 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:54:00.277735 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:54:00.277778 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:54:00.277782 | orchestrator | 2025-06-03 15:54:00.277786 | orchestrator | 2025-06-03 15:54:00.277790 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:54:00.277794 | orchestrator | Tuesday 03 June 2025 15:53:58 +0000 (0:00:09.298) 0:01:58.335 ********** 2025-06-03 15:54:00.277797 | orchestrator | =============================================================================== 2025-06-03 15:54:00.277806 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 20.49s 2025-06-03 15:54:00.277810 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.34s 2025-06-03 15:54:00.277814 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.30s 2025-06-03 15:54:00.277820 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.66s 2025-06-03 15:54:00.277826 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.66s 2025-06-03 15:54:00.277832 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.52s 2025-06-03 15:54:00.277838 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 
4.40s 2025-06-03 15:54:00.277843 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.27s 2025-06-03 15:54:00.277854 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.20s 2025-06-03 15:54:00.277860 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.04s 2025-06-03 15:54:00.277866 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.03s 2025-06-03 15:54:00.277871 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.69s 2025-06-03 15:54:00.277877 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.54s 2025-06-03 15:54:00.277884 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.13s 2025-06-03 15:54:00.277890 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.92s 2025-06-03 15:54:00.277896 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.56s 2025-06-03 15:54:00.277906 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.40s 2025-06-03 15:54:00.277913 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.39s 2025-06-03 15:54:00.277918 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.04s 2025-06-03 15:54:00.277922 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.41s 2025-06-03 15:54:00.277926 | orchestrator | 2025-06-03 15:54:00 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:54:00.277931 | orchestrator | 2025-06-03 15:54:00 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:00.279883 | orchestrator | 2025-06-03 15:54:00 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:00.280446 | orchestrator | 2025-06-03 15:54:00 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:03.325123 | orchestrator | 2025-06-03 15:54:03 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:54:03.328614 | orchestrator | 2025-06-03 15:54:03 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:03.330764 | orchestrator | 2025-06-03 15:54:03 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:03.330824 | orchestrator | 2025-06-03 15:54:03 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:06.381611 | orchestrator | 2025-06-03 15:54:06 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:54:06.383431 | orchestrator | 2025-06-03 15:54:06 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:06.385566 | orchestrator | 2025-06-03 15:54:06 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:06.385649 | orchestrator | 2025-06-03 15:54:06 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:09.429805 | orchestrator | 2025-06-03 15:54:09 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:54:09.432387 | orchestrator | 2025-06-03 15:54:09 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:09.434688 | orchestrator | 2025-06-03 15:54:09 | INFO  | Task 
3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:09.434799 | orchestrator | 2025-06-03 15:54:09 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:12.472684 | orchestrator | 2025-06-03 15:54:12 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:54:12.473152 | orchestrator | 2025-06-03 15:54:12 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:12.473929 | orchestrator | 2025-06-03 15:54:12 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:12.473987 | orchestrator | 2025-06-03 15:54:12 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:15.509828 | orchestrator | 2025-06-03 15:54:15 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:54:15.510312 | orchestrator | 2025-06-03 15:54:15 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:15.511019 | orchestrator | 2025-06-03 15:54:15 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:15.511053 | orchestrator | 2025-06-03 15:54:15 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:18.553114 | orchestrator | 2025-06-03 15:54:18 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:54:18.555074 | orchestrator | 2025-06-03 15:54:18 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:18.556511 | orchestrator | 2025-06-03 15:54:18 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:18.556900 | orchestrator | 2025-06-03 15:54:18 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:21.603878 | orchestrator | 2025-06-03 15:54:21 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state STARTED 2025-06-03 15:54:21.606007 | orchestrator | 2025-06-03 15:54:21 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:21.607333 | orchestrator | 2025-06-03 15:54:21 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:21.607593 | orchestrator | 2025-06-03 15:54:21 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:24.658315 | orchestrator | 2025-06-03 15:54:24 | INFO  | Task cf9a4b4c-8861-464c-983a-69b09f045773 is in state SUCCESS 2025-06-03 15:54:24.660835 | orchestrator | 2025-06-03 15:54:24.660924 | orchestrator | 2025-06-03 15:54:24.660973 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:54:24.660986 | orchestrator | 2025-06-03 15:54:24.660998 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-06-03 15:54:24.661006 | orchestrator | Tuesday 03 June 2025 15:44:58 +0000 (0:00:00.308) 0:00:00.308 ********** 2025-06-03 15:54:24.661013 | orchestrator | changed: [testbed-manager] 2025-06-03 15:54:24.661022 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.661031 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:54:24.661043 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:54:24.661053 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.661066 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.661075 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.661085 | orchestrator | 2025-06-03 15:54:24.661092 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 
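The "Group hosts based on ..." tasks above, together with the enabled-services grouping that follows, are the usual kolla-ansible pattern of building dynamic inventory groups with ansible.builtin.group_by so that later plays can simply target a group such as enable_nova_True. A minimal sketch of that pattern follows; the variable names are illustrative assumptions, not the ones the real playbooks use.

- name: Group hosts based on configuration (sketch)
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on OpenStack release
      ansible.builtin.group_by:
        key: "openstack_release_{{ openstack_release | default('unknown') }}"

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_nova_{{ enable_nova | default(false) | bool }}"

- name: Later plays can then target the dynamic group directly (sketch)
  hosts: enable_nova_True
  gather_facts: false
  tasks:
    - name: Show which hosts picked up the nova role
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} is in the nova deployment group"
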
15:54:24.661098 | orchestrator | Tuesday 03 June 2025 15:44:59 +0000 (0:00:00.856) 0:00:01.165 ********** 2025-06-03 15:54:24.661105 | orchestrator | changed: [testbed-manager] 2025-06-03 15:54:24.661115 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.661242 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:54:24.661251 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:54:24.661257 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.661263 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.661270 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.661276 | orchestrator | 2025-06-03 15:54:24.661282 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:54:24.661288 | orchestrator | Tuesday 03 June 2025 15:44:59 +0000 (0:00:00.676) 0:00:01.842 ********** 2025-06-03 15:54:24.661294 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-03 15:54:24.661301 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-06-03 15:54:24.661307 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-06-03 15:54:24.661341 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-03 15:54:24.661348 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-03 15:54:24.661355 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-06-03 15:54:24.661362 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-06-03 15:54:24.661369 | orchestrator | 2025-06-03 15:54:24.661376 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-03 15:54:24.661383 | orchestrator | 2025-06-03 15:54:24.661390 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-03 15:54:24.661398 | orchestrator | Tuesday 03 June 2025 15:45:00 +0000 (0:00:01.096) 0:00:02.938 ********** 2025-06-03 15:54:24.661406 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:54:24.661412 | orchestrator | 2025-06-03 15:54:24.661420 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-03 15:54:24.661427 | orchestrator | Tuesday 03 June 2025 15:45:02 +0000 (0:00:01.732) 0:00:04.671 ********** 2025-06-03 15:54:24.661435 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-06-03 15:54:24.661443 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-03 15:54:24.661450 | orchestrator | 2025-06-03 15:54:24.661458 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-03 15:54:24.661465 | orchestrator | Tuesday 03 June 2025 15:45:07 +0000 (0:00:04.651) 0:00:09.322 ********** 2025-06-03 15:54:24.661473 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-03 15:54:24.661489 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-03 15:54:24.661495 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.661502 | orchestrator | 2025-06-03 15:54:24.661510 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-03 15:54:24.661517 | orchestrator | Tuesday 03 June 2025 15:45:11 +0000 (0:00:03.954) 0:00:13.276 ********** 2025-06-03 15:54:24.661523 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.661531 | orchestrator | 2025-06-03 15:54:24.661538 | 
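The "Creating Nova databases" and "Creating Nova databases user and setting permissions" tasks only do real work on the first node of the group and talk to the database service remotely. A rough equivalent using the community.mysql modules is sketched below; the login parameters, grant list and password variables are placeholders rather than the values kolla-ansible actually renders.

- name: Create the nova_api and nova_cell0 databases (sketch)
  community.mysql.mysql_db:
    login_host: "{{ database_address }}"        # placeholder for the database VIP
    login_user: root
    login_password: "{{ database_password }}"
    name: "{{ item }}"
    state: present
  loop:
    - nova_api
    - nova_cell0
  run_once: true

- name: Create the nova database user and set permissions (sketch)
  community.mysql.mysql_user:
    login_host: "{{ database_address }}"
    login_user: root
    login_password: "{{ database_password }}"
    name: nova
    password: "{{ nova_database_password }}"
    host: "%"
    priv: "nova_api.*:ALL/nova_cell0.*:ALL"
    state: present
  run_once: true
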
orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-03 15:54:24.661545 | orchestrator | Tuesday 03 June 2025 15:45:12 +0000 (0:00:00.802) 0:00:14.079 ********** 2025-06-03 15:54:24.661565 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.661572 | orchestrator | 2025-06-03 15:54:24.661579 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-03 15:54:24.661586 | orchestrator | Tuesday 03 June 2025 15:45:13 +0000 (0:00:01.699) 0:00:15.778 ********** 2025-06-03 15:54:24.661593 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.661600 | orchestrator | 2025-06-03 15:54:24.661607 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-03 15:54:24.661614 | orchestrator | Tuesday 03 June 2025 15:45:17 +0000 (0:00:03.224) 0:00:19.003 ********** 2025-06-03 15:54:24.661621 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.661627 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.661633 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.661639 | orchestrator | 2025-06-03 15:54:24.661645 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-03 15:54:24.661651 | orchestrator | Tuesday 03 June 2025 15:45:17 +0000 (0:00:00.337) 0:00:19.340 ********** 2025-06-03 15:54:24.661657 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:24.661663 | orchestrator | 2025-06-03 15:54:24.661669 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-06-03 15:54:24.661676 | orchestrator | Tuesday 03 June 2025 15:45:48 +0000 (0:00:31.429) 0:00:50.770 ********** 2025-06-03 15:54:24.661681 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.661688 | orchestrator | 2025-06-03 15:54:24.661695 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-03 15:54:24.661702 | orchestrator | Tuesday 03 June 2025 15:46:04 +0000 (0:00:15.834) 0:01:06.604 ********** 2025-06-03 15:54:24.661725 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:24.661740 | orchestrator | 2025-06-03 15:54:24.661763 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-03 15:54:24.661770 | orchestrator | Tuesday 03 June 2025 15:46:21 +0000 (0:00:16.416) 0:01:23.021 ********** 2025-06-03 15:54:24.661792 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:24.661799 | orchestrator | 2025-06-03 15:54:24.661826 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-03 15:54:24.661833 | orchestrator | Tuesday 03 June 2025 15:46:22 +0000 (0:00:01.332) 0:01:24.354 ********** 2025-06-03 15:54:24.661840 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.661868 | orchestrator | 2025-06-03 15:54:24.661874 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-03 15:54:24.661881 | orchestrator | Tuesday 03 June 2025 15:46:22 +0000 (0:00:00.436) 0:01:24.790 ********** 2025-06-03 15:54:24.661889 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:54:24.661896 | orchestrator | 2025-06-03 15:54:24.661903 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-03 15:54:24.661909 | orchestrator 
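The "Running Nova API bootstrap container" and "Create cell0 mappings" steps wrap nova-manage calls in a short-lived bootstrap container, and "Get a list of existing cells" then reads the result back. Stripped of the container plumbing, the underlying commands look roughly like this; the connection string and password variables are placeholders.

- name: Run the nova API database migrations (sketch)
  ansible.builtin.command: nova-manage api_db sync
  run_once: true
  changed_when: true

- name: Map cell0 to its database (sketch)
  ansible.builtin.command: >
    nova-manage cell_v2 map_cell0
    --database_connection mysql+pymysql://nova:{{ nova_database_password }}@{{ database_address }}/nova_cell0
  run_once: true
  changed_when: true

- name: Get a list of existing cells (sketch)
  ansible.builtin.command: nova-manage cell_v2 list_cells --verbose
  register: existing_cells
  changed_when: false
  run_once: true
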
| Tuesday 03 June 2025 15:46:23 +0000 (0:00:00.486) 0:01:25.277 ********** 2025-06-03 15:54:24.661916 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:24.661923 | orchestrator | 2025-06-03 15:54:24.661929 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-03 15:54:24.661945 | orchestrator | Tuesday 03 June 2025 15:46:43 +0000 (0:00:19.957) 0:01:45.234 ********** 2025-06-03 15:54:24.661951 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.661958 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.661964 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.661981 | orchestrator | 2025-06-03 15:54:24.661987 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-06-03 15:54:24.661994 | orchestrator | 2025-06-03 15:54:24.662000 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-03 15:54:24.662006 | orchestrator | Tuesday 03 June 2025 15:46:43 +0000 (0:00:00.341) 0:01:45.576 ********** 2025-06-03 15:54:24.662053 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:54:24.662061 | orchestrator | 2025-06-03 15:54:24.662067 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-06-03 15:54:24.662073 | orchestrator | Tuesday 03 June 2025 15:46:44 +0000 (0:00:00.749) 0:01:46.326 ********** 2025-06-03 15:54:24.662079 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662085 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662091 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.662096 | orchestrator | 2025-06-03 15:54:24.662101 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-06-03 15:54:24.662107 | orchestrator | Tuesday 03 June 2025 15:46:46 +0000 (0:00:02.245) 0:01:48.572 ********** 2025-06-03 15:54:24.662114 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662121 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662127 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.662133 | orchestrator | 2025-06-03 15:54:24.662139 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-03 15:54:24.662144 | orchestrator | Tuesday 03 June 2025 15:46:48 +0000 (0:00:02.373) 0:01:50.945 ********** 2025-06-03 15:54:24.662150 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.662156 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662163 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662169 | orchestrator | 2025-06-03 15:54:24.662174 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-03 15:54:24.662180 | orchestrator | Tuesday 03 June 2025 15:46:49 +0000 (0:00:00.366) 0:01:51.311 ********** 2025-06-03 15:54:24.662187 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-03 15:54:24.662193 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662199 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-03 15:54:24.662211 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662218 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-03 15:54:24.662223 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-06-03 15:54:24.662229 | orchestrator | 2025-06-03 
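The service-rabbitmq tasks make sure the messaging vhost and user for nova exist; they run once and are delegated to a RabbitMQ host, which is why the log shows the delegate expression {{ service_rabbitmq_delegate_host }} rather than a resolved hostname. A sketch with the community.rabbitmq modules follows; the group name, user name and vhost are assumptions.

- name: Ensure the RabbitMQ vhost exists (sketch)
  community.rabbitmq.rabbitmq_vhost:
    name: /
    state: present
  delegate_to: "{{ groups['rabbitmq'] | first }}"   # assumed group name
  run_once: true

- name: Ensure the RabbitMQ user exists with full privileges on the vhost (sketch)
  community.rabbitmq.rabbitmq_user:
    user: openstack
    password: "{{ rabbitmq_password }}"
    vhost: /
    configure_priv: ".*"
    read_priv: ".*"
    write_priv: ".*"
    state: present
  delegate_to: "{{ groups['rabbitmq'] | first }}"
  run_once: true
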
15:54:24.662235 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-03 15:54:24.662241 | orchestrator | Tuesday 03 June 2025 15:46:58 +0000 (0:00:09.648) 0:02:00.960 ********** 2025-06-03 15:54:24.662247 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.662253 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662265 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662271 | orchestrator | 2025-06-03 15:54:24.662278 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-03 15:54:24.662283 | orchestrator | Tuesday 03 June 2025 15:46:59 +0000 (0:00:00.432) 0:02:01.393 ********** 2025-06-03 15:54:24.662289 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-03 15:54:24.662295 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.662301 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-03 15:54:24.662308 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662314 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-03 15:54:24.662320 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662326 | orchestrator | 2025-06-03 15:54:24.662331 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-03 15:54:24.662338 | orchestrator | Tuesday 03 June 2025 15:47:00 +0000 (0:00:00.745) 0:02:02.138 ********** 2025-06-03 15:54:24.662344 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662350 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.662355 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662362 | orchestrator | 2025-06-03 15:54:24.662368 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-06-03 15:54:24.662374 | orchestrator | Tuesday 03 June 2025 15:47:00 +0000 (0:00:00.556) 0:02:02.694 ********** 2025-06-03 15:54:24.662380 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662385 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662392 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.662398 | orchestrator | 2025-06-03 15:54:24.662404 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-06-03 15:54:24.662410 | orchestrator | Tuesday 03 June 2025 15:47:01 +0000 (0:00:01.063) 0:02:03.757 ********** 2025-06-03 15:54:24.662416 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662422 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662437 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.662444 | orchestrator | 2025-06-03 15:54:24.662450 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-06-03 15:54:24.662456 | orchestrator | Tuesday 03 June 2025 15:47:04 +0000 (0:00:02.277) 0:02:06.035 ********** 2025-06-03 15:54:24.662462 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662468 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662474 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:24.662481 | orchestrator | 2025-06-03 15:54:24.662487 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-03 15:54:24.662492 | orchestrator | Tuesday 03 June 2025 15:47:25 +0000 (0:00:21.416) 0:02:27.452 ********** 2025-06-03 15:54:24.662498 | orchestrator | skipping: [testbed-node-2] 
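The nova-cell bootstrap that follows ("Get a list of existing cells", "Create cell") is again a pair of nova-manage calls executed on the first node only, with the other nodes skipped. Outside the bootstrap container it would look roughly like this; the cell name, transport URL and database connection are placeholders.

- name: Check which cells already exist (sketch)
  ansible.builtin.command: nova-manage cell_v2 list_cells --verbose
  register: cell_list
  changed_when: false
  run_once: true

- name: Create the cell if it is not present yet (sketch)
  ansible.builtin.command: >
    nova-manage cell_v2 create_cell
    --name cell1
    --transport-url rabbit://openstack:{{ rabbitmq_password }}@{{ rabbitmq_address }}:5672/
    --database_connection mysql+pymysql://nova:{{ nova_database_password }}@{{ database_address }}/nova
  when: "'cell1' not in cell_list.stdout"
  run_once: true
  changed_when: true
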
2025-06-03 15:54:24.662505 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662510 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:24.662516 | orchestrator | 2025-06-03 15:54:24.662523 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-03 15:54:24.662529 | orchestrator | Tuesday 03 June 2025 15:47:38 +0000 (0:00:12.630) 0:02:40.082 ********** 2025-06-03 15:54:24.662535 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:24.662541 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662547 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662558 | orchestrator | 2025-06-03 15:54:24.662564 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-06-03 15:54:24.662570 | orchestrator | Tuesday 03 June 2025 15:47:39 +0000 (0:00:00.892) 0:02:40.975 ********** 2025-06-03 15:54:24.662577 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662582 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662588 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.662594 | orchestrator | 2025-06-03 15:54:24.662599 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-06-03 15:54:24.662605 | orchestrator | Tuesday 03 June 2025 15:47:50 +0000 (0:00:11.626) 0:02:52.601 ********** 2025-06-03 15:54:24.662611 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.662616 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662622 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662628 | orchestrator | 2025-06-03 15:54:24.662634 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-03 15:54:24.662640 | orchestrator | Tuesday 03 June 2025 15:47:52 +0000 (0:00:01.563) 0:02:54.165 ********** 2025-06-03 15:54:24.662646 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.662651 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.662658 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.662664 | orchestrator | 2025-06-03 15:54:24.662670 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-06-03 15:54:24.662676 | orchestrator | 2025-06-03 15:54:24.662682 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-03 15:54:24.662687 | orchestrator | Tuesday 03 June 2025 15:47:52 +0000 (0:00:00.315) 0:02:54.480 ********** 2025-06-03 15:54:24.662693 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:54:24.662701 | orchestrator | 2025-06-03 15:54:24.662730 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-06-03 15:54:24.662738 | orchestrator | Tuesday 03 June 2025 15:47:53 +0000 (0:00:00.499) 0:02:54.980 ********** 2025-06-03 15:54:24.662744 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-06-03 15:54:24.662750 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-06-03 15:54:24.662756 | orchestrator | 2025-06-03 15:54:24.662761 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-06-03 15:54:24.662766 | orchestrator | Tuesday 03 June 2025 15:47:56 +0000 (0:00:03.437) 0:02:58.417 ********** 2025-06-03 15:54:24.662772 | orchestrator | skipping: 
[testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-06-03 15:54:24.662780 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-06-03 15:54:24.662792 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-06-03 15:54:24.662797 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-06-03 15:54:24.662804 | orchestrator | 2025-06-03 15:54:24.662810 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-06-03 15:54:24.662817 | orchestrator | Tuesday 03 June 2025 15:48:03 +0000 (0:00:07.193) 0:03:05.611 ********** 2025-06-03 15:54:24.662823 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:54:24.662829 | orchestrator | 2025-06-03 15:54:24.662835 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-06-03 15:54:24.662841 | orchestrator | Tuesday 03 June 2025 15:48:06 +0000 (0:00:03.177) 0:03:08.788 ********** 2025-06-03 15:54:24.662847 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:54:24.662853 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-06-03 15:54:24.662860 | orchestrator | 2025-06-03 15:54:24.662866 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-06-03 15:54:24.662878 | orchestrator | Tuesday 03 June 2025 15:48:10 +0000 (0:00:04.140) 0:03:12.929 ********** 2025-06-03 15:54:24.662885 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:54:24.662891 | orchestrator | 2025-06-03 15:54:24.662896 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-06-03 15:54:24.662903 | orchestrator | Tuesday 03 June 2025 15:48:14 +0000 (0:00:03.589) 0:03:16.518 ********** 2025-06-03 15:54:24.662908 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-06-03 15:54:24.662915 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-06-03 15:54:24.662921 | orchestrator | 2025-06-03 15:54:24.662927 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-03 15:54:24.663010 | orchestrator | Tuesday 03 June 2025 15:48:22 +0000 (0:00:08.376) 0:03:24.895 ********** 2025-06-03 15:54:24.663021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663100 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663112 | orchestrator | 2025-06-03 15:54:24.663119 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-06-03 15:54:24.663125 | orchestrator | Tuesday 03 June 2025 15:48:24 +0000 (0:00:01.839) 0:03:26.734 ********** 2025-06-03 15:54:24.663131 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.663137 | orchestrator | 2025-06-03 15:54:24.663143 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-06-03 15:54:24.663148 | orchestrator | Tuesday 03 June 2025 15:48:24 +0000 (0:00:00.133) 0:03:26.868 ********** 2025-06-03 15:54:24.663154 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.663160 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.663166 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.663172 | orchestrator | 2025-06-03 15:54:24.663177 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-06-03 15:54:24.663184 | orchestrator | Tuesday 03 June 2025 15:48:25 +0000 (0:00:00.804) 0:03:27.673 ********** 2025-06-03 15:54:24.663190 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:54:24.663196 | orchestrator | 2025-06-03 15:54:24.663202 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-06-03 15:54:24.663208 | orchestrator | Tuesday 03 June 2025 15:48:26 +0000 (0:00:01.264) 0:03:28.937 ********** 2025-06-03 15:54:24.663214 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.663221 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.663228 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.663233 | orchestrator | 2025-06-03 15:54:24.663244 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-03 15:54:24.663249 | orchestrator | Tuesday 03 June 2025 15:48:27 +0000 (0:00:00.514) 0:03:29.451 ********** 2025-06-03 15:54:24.663256 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:54:24.663262 | orchestrator | 2025-06-03 15:54:24.663268 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-03 15:54:24.663277 | orchestrator | Tuesday 
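The item dictionaries dumped above for nova-api carry the container healthcheck (healthcheck_curl against port 8774 on the API interface address) and the haproxy listener definitions for 8774 (compute API) and 8775 (metadata). The healthcheck is essentially an HTTP probe of the local API; a standalone equivalent is sketched below, with the address variable assumed and the retry timing chosen to mirror the interval/retries values shown in the dump.

- name: Probe the local nova-api endpoint like the container healthcheck does (sketch)
  ansible.builtin.uri:
    url: "http://{{ api_interface_address }}:8774/"
    status_code: [200, 300]   # the version document may answer 200 or a multiple-choices response
  register: nova_api_probe
  retries: 3        # mirrors 'retries': '3' in the healthcheck definition
  delay: 30         # mirrors 'interval': '30'
  until: nova_api_probe is succeeded
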
03 June 2025 15:48:28 +0000 (0:00:00.685) 0:03:30.137 ********** 2025-06-03 15:54:24.663288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663342 | orchestrator | 2025-06-03 15:54:24.663348 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-03 15:54:24.663354 | orchestrator | Tuesday 03 June 2025 15:48:30 +0000 (0:00:02.586) 0:03:32.723 ********** 2025-06-03 15:54:24.663361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:54:24.663367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.663383 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.663392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:54:24.663403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.663409 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.663415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:54:24.663422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.663434 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.663440 | orchestrator | 2025-06-03 15:54:24.663446 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-03 15:54:24.663452 | orchestrator | Tuesday 03 June 2025 15:48:31 +0000 (0:00:00.754) 0:03:33.478 ********** 2025-06-03 15:54:24.663461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:54:24.663467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.663474 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.663484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:54:24.663491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.663501 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.663510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:54:24.663517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.663523 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.663529 | orchestrator | 2025-06-03 15:54:24.663535 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-06-03 15:54:24.663541 | orchestrator | Tuesday 03 June 2025 15:48:33 +0000 (0:00:02.122) 0:03:35.601 ********** 2025-06-03 15:54:24.663551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663607 | orchestrator | 2025-06-03 15:54:24.663613 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-03 15:54:24.663619 | orchestrator | Tuesday 03 June 2025 15:48:37 +0000 (0:00:03.503) 0:03:39.104 ********** 2025-06-03 15:54:24.663625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663676 | orchestrator | 2025-06-03 15:54:24.663685 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-03 15:54:24.663692 | orchestrator | Tuesday 03 June 2025 15:48:47 +0000 (0:00:09.948) 0:03:49.053 ********** 2025-06-03 15:54:24.663701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:54:24.663756 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.663763 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.663774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:54:24.663780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.663785 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.663795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:54:24.663806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.663813 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.663818 | orchestrator | 2025-06-03 15:54:24.663824 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-03 15:54:24.663836 | orchestrator | Tuesday 03 June 2025 15:48:47 +0000 (0:00:00.548) 0:03:49.602 ********** 2025-06-03 15:54:24.663842 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.663848 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:54:24.663854 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:54:24.663859 | orchestrator | 2025-06-03 15:54:24.663865 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-03 15:54:24.663871 | orchestrator | Tuesday 03 June 2025 15:48:50 +0000 (0:00:02.973) 0:03:52.576 ********** 2025-06-03 15:54:24.663876 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.663882 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.663888 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.663894 | orchestrator | 2025-06-03 15:54:24.663900 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-03 15:54:24.663906 | orchestrator | Tuesday 03 June 2025 15:48:51 +0000 (0:00:00.514) 0:03:53.090 ********** 2025-06-03 15:54:24.663912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:54:24.663949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.663969 | orchestrator | 2025-06-03 15:54:24.663975 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-03 15:54:24.663981 | orchestrator | Tuesday 03 June 2025 15:48:52 +0000 (0:00:01.703) 0:03:54.795 ********** 2025-06-03 15:54:24.663987 | orchestrator | 2025-06-03 15:54:24.663993 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-03 15:54:24.664002 | orchestrator | Tuesday 03 June 2025 15:48:53 +0000 (0:00:00.345) 0:03:55.140 ********** 2025-06-03 15:54:24.664008 | orchestrator | 2025-06-03 15:54:24.664014 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-03 15:54:24.664020 | orchestrator | Tuesday 03 June 2025 15:48:53 +0000 (0:00:00.387) 0:03:55.528 ********** 2025-06-03 15:54:24.664026 | orchestrator | 2025-06-03 15:54:24.664032 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-03 15:54:24.664038 | orchestrator | Tuesday 03 June 2025 15:48:54 +0000 (0:00:00.606) 0:03:56.134 ********** 2025-06-03 15:54:24.664044 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.664050 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:54:24.664056 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:54:24.664062 | orchestrator | 2025-06-03 15:54:24.664067 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-03 15:54:24.664073 | orchestrator | Tuesday 03 June 2025 15:49:18 +0000 (0:00:24.215) 0:04:20.349 ********** 2025-06-03 15:54:24.664080 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.664089 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:54:24.664095 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:54:24.664101 | orchestrator | 2025-06-03 15:54:24.664106 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-03 15:54:24.664112 | orchestrator | 2025-06-03 15:54:24.664118 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-03 15:54:24.664124 | orchestrator | Tuesday 03 June 2025 15:49:30 +0000 (0:00:11.842) 0:04:32.192 ********** 2025-06-03 15:54:24.664130 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:54:24.664137 | orchestrator | 2025-06-03 15:54:24.664146 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-03 15:54:24.664152 | orchestrator | Tuesday 03 June 2025 15:49:32 +0000 (0:00:02.248) 0:04:34.440 ********** 2025-06-03 15:54:24.664158 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.664164 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.664170 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.664176 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.664182 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.664188 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.664194 | orchestrator | 2025-06-03 15:54:24.664199 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-03 15:54:24.664205 | orchestrator | Tuesday 03 June 2025 15:49:33 +0000 (0:00:00.834) 0:04:35.275 ********** 2025-06-03 15:54:24.664212 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.664218 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.664224 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.664230 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:54:24.664237 | orchestrator | 2025-06-03 15:54:24.664243 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-03 15:54:24.664249 | orchestrator | Tuesday 03 June 2025 15:49:34 +0000 (0:00:01.293) 0:04:36.568 ********** 2025-06-03 15:54:24.664257 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-03 15:54:24.664263 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-03 15:54:24.664269 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-03 15:54:24.664275 | orchestrator | 2025-06-03 15:54:24.664281 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-03 15:54:24.664287 | orchestrator | Tuesday 03 June 2025 15:49:35 +0000 (0:00:01.025) 0:04:37.594 ********** 2025-06-03 15:54:24.664293 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-03 15:54:24.664298 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-03 15:54:24.664304 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-03 15:54:24.664310 | orchestrator | 2025-06-03 15:54:24.664315 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-03 15:54:24.664321 | orchestrator | Tuesday 03 June 2025 15:49:37 +0000 (0:00:01.578) 0:04:39.173 ********** 2025-06-03 15:54:24.664327 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-03 15:54:24.664333 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.664339 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-03 15:54:24.664346 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.664352 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-03 15:54:24.664358 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.664365 | orchestrator | 2025-06-03 15:54:24.664371 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-03 15:54:24.664377 | orchestrator | Tuesday 03 June 2025 15:49:39 +0000 (0:00:01.844) 0:04:41.017 ********** 2025-06-03 15:54:24.664384 | orchestrator | skipping: [testbed-node-0] => 
(item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:54:24.664390 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:54:24.664402 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.664408 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:54:24.664413 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:54:24.664419 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.664424 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-03 15:54:24.664430 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-03 15:54:24.664436 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:54:24.664442 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-03 15:54:24.664447 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:54:24.664456 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.664462 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-03 15:54:24.664468 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-03 15:54:24.664473 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-03 15:54:24.664479 | orchestrator | 2025-06-03 15:54:24.664484 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-03 15:54:24.664490 | orchestrator | Tuesday 03 June 2025 15:49:40 +0000 (0:00:01.287) 0:04:42.304 ********** 2025-06-03 15:54:24.664495 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.664501 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.664506 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.664512 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.664517 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.664523 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.664528 | orchestrator | 2025-06-03 15:54:24.664534 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-03 15:54:24.664539 | orchestrator | Tuesday 03 June 2025 15:49:41 +0000 (0:00:01.601) 0:04:43.906 ********** 2025-06-03 15:54:24.664545 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.664550 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.664556 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.664561 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.664567 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.664573 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.664578 | orchestrator | 2025-06-03 15:54:24.664584 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-03 15:54:24.664590 | orchestrator | Tuesday 03 June 2025 15:49:43 +0000 (0:00:01.911) 0:04:45.817 ********** 2025-06-03 15:54:24.664603 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664633 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}}) 2025-06-03 15:54:24.664691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664702 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.664783 | orchestrator | 2025-06-03 15:54:24.664789 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-03 15:54:24.664795 | orchestrator | Tuesday 03 June 2025 15:49:47 +0000 (0:00:03.549) 0:04:49.367 ********** 2025-06-03 15:54:24.664801 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:54:24.664808 | orchestrator | 2025-06-03 15:54:24.664813 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-03 15:54:24.664819 | orchestrator | Tuesday 03 June 2025 15:49:49 +0000 (0:00:02.434) 0:04:51.801 ********** 2025-06-03 15:54:24.664829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665066 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665090 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.665199 | orchestrator | 2025-06-03 15:54:24.665205 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-03 15:54:24.665210 | orchestrator | Tuesday 03 June 2025 15:49:55 +0000 (0:00:05.506) 0:04:57.307 ********** 2025-06-03 15:54:24.665216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:54:24.665222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665229 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.665239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:54:24.665247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.665261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665268 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.665274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:54:24.665281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.665289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665295 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.665302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:54:24.665315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.665321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665326 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.665332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:54:24.665338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665344 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.665353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:54:24.665359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665369 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.665375 | orchestrator | 2025-06-03 15:54:24.665380 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-03 15:54:24.665386 | orchestrator | Tuesday 03 June 2025 15:49:58 +0000 (0:00:02.668) 0:04:59.975 ********** 2025-06-03 15:54:24.665395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 
15:54:24.665401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.665407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665413 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.665422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:54:24.665428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.665445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665450 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.665456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:54:24.665462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.665468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665474 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.665483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:54:24.665495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:54:24.665505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665511 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.665517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:54:24.665528 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.665534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.665540 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.665546 | orchestrator | 2025-06-03 15:54:24.665551 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-03 15:54:24.665557 | orchestrator | Tuesday 03 June 2025 15:50:00 +0000 (0:00:02.930) 0:05:02.906 ********** 2025-06-03 15:54:24.665563 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.665568 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.665578 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.665584 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:54:24.665590 | orchestrator | 2025-06-03 15:54:24.665595 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-06-03 15:54:24.665604 | orchestrator | Tuesday 03 June 2025 15:50:01 +0000 (0:00:00.764) 0:05:03.670 ********** 2025-06-03 15:54:24.665609 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-03 15:54:24.665616 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-03 15:54:24.665622 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-03 15:54:24.665627 | orchestrator | 2025-06-03 15:54:24.665632 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-06-03 15:54:24.665638 | orchestrator | Tuesday 03 June 2025 15:50:03 +0000 (0:00:01.351) 0:05:05.022 ********** 2025-06-03 15:54:24.665644 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-03 15:54:24.665649 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-03 15:54:24.665656 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-03 15:54:24.665662 | orchestrator | 2025-06-03 15:54:24.665669 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-06-03 15:54:24.665675 | orchestrator | Tuesday 03 June 2025 15:50:04 +0000 (0:00:01.208) 0:05:06.230 ********** 2025-06-03 15:54:24.665681 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:54:24.665687 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:54:24.665693 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:54:24.665699 | orchestrator | 2025-06-03 15:54:24.665705 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-06-03 15:54:24.665732 | orchestrator | Tuesday 03 June 2025 15:50:05 +0000 (0:00:00.913) 0:05:07.144 ********** 2025-06-03 15:54:24.665737 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:54:24.665743 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:54:24.665749 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:54:24.665755 | orchestrator | 2025-06-03 15:54:24.665761 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-06-03 15:54:24.665768 | orchestrator | Tuesday 03 June 2025 15:50:05 +0000 (0:00:00.633) 0:05:07.778 ********** 2025-06-03 15:54:24.665775 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-03 15:54:24.665784 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-03 15:54:24.665791 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-03 15:54:24.665840 | orchestrator | 2025-06-03 15:54:24.665846 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-06-03 15:54:24.665853 | orchestrator | Tuesday 03 June 2025 15:50:07 +0000 (0:00:01.594) 0:05:09.372 ********** 2025-06-03 15:54:24.665859 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-03 15:54:24.665865 | 
orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-03 15:54:24.665872 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-03 15:54:24.665878 | orchestrator | 2025-06-03 15:54:24.665931 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-06-03 15:54:24.665937 | orchestrator | Tuesday 03 June 2025 15:50:08 +0000 (0:00:01.252) 0:05:10.625 ********** 2025-06-03 15:54:24.665944 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-03 15:54:24.665949 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-03 15:54:24.665956 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-03 15:54:24.665962 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-06-03 15:54:24.665968 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-06-03 15:54:24.665974 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-06-03 15:54:24.665981 | orchestrator | 2025-06-03 15:54:24.665987 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-06-03 15:54:24.665993 | orchestrator | Tuesday 03 June 2025 15:50:13 +0000 (0:00:04.778) 0:05:15.403 ********** 2025-06-03 15:54:24.666005 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.666036 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.666044 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.666051 | orchestrator | 2025-06-03 15:54:24.666057 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-06-03 15:54:24.666064 | orchestrator | Tuesday 03 June 2025 15:50:13 +0000 (0:00:00.325) 0:05:15.729 ********** 2025-06-03 15:54:24.666070 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.666077 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.666082 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.666088 | orchestrator | 2025-06-03 15:54:24.666094 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-06-03 15:54:24.666100 | orchestrator | Tuesday 03 June 2025 15:50:14 +0000 (0:00:00.438) 0:05:16.167 ********** 2025-06-03 15:54:24.666106 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.666113 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.666119 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.666124 | orchestrator | 2025-06-03 15:54:24.666130 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-06-03 15:54:24.666136 | orchestrator | Tuesday 03 June 2025 15:50:16 +0000 (0:00:02.156) 0:05:18.324 ********** 2025-06-03 15:54:24.666144 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-03 15:54:24.666151 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-03 15:54:24.666158 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-03 15:54:24.666165 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-03 15:54:24.666171 | orchestrator | changed: 
[testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-03 15:54:24.666178 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-03 15:54:24.666184 | orchestrator | 2025-06-03 15:54:24.666201 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-06-03 15:54:24.666208 | orchestrator | Tuesday 03 June 2025 15:50:20 +0000 (0:00:03.748) 0:05:22.073 ********** 2025-06-03 15:54:24.666214 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-03 15:54:24.666221 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-03 15:54:24.666228 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-03 15:54:24.666234 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-03 15:54:24.666241 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.666247 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-03 15:54:24.666254 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.666260 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-03 15:54:24.666267 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.666273 | orchestrator | 2025-06-03 15:54:24.666279 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-06-03 15:54:24.666286 | orchestrator | Tuesday 03 June 2025 15:50:23 +0000 (0:00:03.812) 0:05:25.885 ********** 2025-06-03 15:54:24.666292 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.666297 | orchestrator | 2025-06-03 15:54:24.666303 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-06-03 15:54:24.666309 | orchestrator | Tuesday 03 June 2025 15:50:24 +0000 (0:00:00.176) 0:05:26.061 ********** 2025-06-03 15:54:24.666318 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.666325 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.666333 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.666347 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.666353 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.666359 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.666365 | orchestrator | 2025-06-03 15:54:24.666371 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-06-03 15:54:24.666387 | orchestrator | Tuesday 03 June 2025 15:50:25 +0000 (0:00:01.189) 0:05:27.251 ********** 2025-06-03 15:54:24.666394 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-03 15:54:24.666401 | orchestrator | 2025-06-03 15:54:24.666407 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-06-03 15:54:24.666414 | orchestrator | Tuesday 03 June 2025 15:50:26 +0000 (0:00:00.804) 0:05:28.055 ********** 2025-06-03 15:54:24.666420 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.666425 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.666431 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.666436 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.666444 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.666451 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.666459 | orchestrator | 2025-06-03 15:54:24.666466 | orchestrator | TASK 
[nova-cell : Copying over config.json files for services] ***************** 2025-06-03 15:54:24.666471 | orchestrator | Tuesday 03 June 2025 15:50:26 +0000 (0:00:00.556) 0:05:28.612 ********** 2025-06-03 15:54:24.666478 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666486 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666594 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666609 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.666623 | orchestrator | 2025-06-03 15:54:24.666629 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-03 15:54:24.666638 | orchestrator | Tuesday 03 June 2025 15:50:30 +0000 (0:00:04.216) 0:05:32.828 ********** 2025-06-03 15:54:24.666644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:54:24.666653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.666660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:54:24.666666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.666677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:54:24.666688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.667046 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.667079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.667086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.667094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.667109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.667125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.667138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2025-06-03 15:54:24.667145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.667151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.667157 | orchestrator | 2025-06-03 15:54:24.667163 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-03 15:54:24.667169 | orchestrator | Tuesday 03 June 2025 15:50:37 +0000 (0:00:06.767) 0:05:39.596 ********** 2025-06-03 15:54:24.667178 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.667184 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.667190 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.667198 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.667203 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.667208 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.667215 | orchestrator | 2025-06-03 15:54:24.667221 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-03 15:54:24.667228 | orchestrator | Tuesday 03 June 2025 15:50:39 +0000 (0:00:01.755) 0:05:41.352 ********** 2025-06-03 15:54:24.667238 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-03 15:54:24.667250 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-03 15:54:24.667259 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-03 15:54:24.667265 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-03 15:54:24.667272 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-03 15:54:24.667277 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-03 15:54:24.667285 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.667291 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-03 15:54:24.667297 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-03 15:54:24.667304 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.667313 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 
'dest': 'libvirtd.conf'})  2025-06-03 15:54:24.667319 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.667325 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-03 15:54:24.667330 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-03 15:54:24.667336 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-03 15:54:24.667342 | orchestrator | 2025-06-03 15:54:24.667349 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-03 15:54:24.667355 | orchestrator | Tuesday 03 June 2025 15:50:42 +0000 (0:00:03.378) 0:05:44.730 ********** 2025-06-03 15:54:24.667361 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.667366 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.667372 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.667381 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.667386 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.667391 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.667397 | orchestrator | 2025-06-03 15:54:24.667404 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-03 15:54:24.667410 | orchestrator | Tuesday 03 June 2025 15:50:43 +0000 (0:00:00.637) 0:05:45.367 ********** 2025-06-03 15:54:24.667418 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-03 15:54:24.667424 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-03 15:54:24.667434 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-03 15:54:24.667441 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-03 15:54:24.667446 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-03 15:54:24.667453 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-03 15:54:24.667459 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-03 15:54:24.667464 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-03 15:54:24.667470 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-03 15:54:24.667475 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-03 15:54:24.667482 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-03 15:54:24.667498 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.667504 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-03 15:54:24.667510 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.667516 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-03 15:54:24.667522 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.667529 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-03 15:54:24.667534 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-03 15:54:24.667543 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-03 15:54:24.667550 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-03 15:54:24.667556 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-03 15:54:24.667561 | orchestrator | 2025-06-03 15:54:24.667569 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-03 15:54:24.667575 | orchestrator | Tuesday 03 June 2025 15:50:49 +0000 (0:00:05.614) 0:05:50.982 ********** 2025-06-03 15:54:24.667581 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-03 15:54:24.667587 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-03 15:54:24.667594 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-03 15:54:24.667599 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-03 15:54:24.667605 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-03 15:54:24.667615 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-03 15:54:24.667623 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-03 15:54:24.667637 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-03 15:54:24.667643 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-03 15:54:24.667649 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-03 15:54:24.667659 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-03 15:54:24.667665 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-03 15:54:24.667671 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-03 15:54:24.667679 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-03 15:54:24.667686 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-03 15:54:24.667693 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.667699 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-03 15:54:24.667705 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.667738 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-03 15:54:24.667745 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-03 15:54:24.667751 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.667758 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-03 15:54:24.667765 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-03 15:54:24.667780 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-03 15:54:24.667787 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-03 15:54:24.667793 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-03 15:54:24.667800 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-03 15:54:24.667805 | orchestrator | 2025-06-03 15:54:24.667811 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-03 15:54:24.667817 | orchestrator | Tuesday 03 June 2025 15:50:59 +0000 (0:00:10.217) 0:06:01.199 ********** 2025-06-03 15:54:24.667823 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.667830 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.667837 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.667844 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.667851 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.667856 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.667863 | orchestrator | 2025-06-03 15:54:24.667870 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-03 15:54:24.667876 | orchestrator | Tuesday 03 June 2025 15:50:59 +0000 (0:00:00.700) 0:06:01.900 ********** 2025-06-03 15:54:24.667882 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.667889 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.667894 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.667900 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.667906 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.667912 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.667918 | orchestrator | 2025-06-03 15:54:24.667925 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-03 15:54:24.667933 | orchestrator | Tuesday 03 June 2025 15:51:00 +0000 (0:00:00.870) 0:06:02.770 ********** 2025-06-03 15:54:24.667939 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.667945 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.667951 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.667957 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.667963 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.667969 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.667976 | orchestrator | 2025-06-03 15:54:24.667982 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-03 15:54:24.667988 | orchestrator | Tuesday 03 June 2025 15:51:03 +0000 (0:00:02.495) 0:06:05.266 ********** 2025-06-03 15:54:24.667997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:54:24.668009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.668021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.668028 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.668040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:54:24.668047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.668053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.668060 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.668069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:54:24.668080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.668090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:54:24.668097 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.668104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:54:24.668110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.668116 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.668122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:54:24.668133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.668144 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.668150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:54:24.668159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:54:24.668165 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.668171 | orchestrator | 2025-06-03 15:54:24.668179 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-03 15:54:24.668185 | orchestrator | Tuesday 03 June 2025 15:51:04 +0000 (0:00:01.618) 0:06:06.885 ********** 2025-06-03 15:54:24.668190 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-03 15:54:24.668197 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-03 15:54:24.668202 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.668209 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-03 15:54:24.668214 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-03 15:54:24.668220 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.668225 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-03 15:54:24.668231 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-03 15:54:24.668236 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.668242 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-03 15:54:24.668248 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-03 15:54:24.668254 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.668260 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-03 15:54:24.668265 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-03 15:54:24.668271 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.668277 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-03 15:54:24.668282 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-03 15:54:24.668288 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.668294 | orchestrator | 2025-06-03 15:54:24.668299 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-03 15:54:24.668305 | orchestrator | Tuesday 03 June 2025 15:51:05 +0000 (0:00:00.552) 0:06:07.437 ********** 2025-06-03 15:54:24.668311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668336 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668421 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668428 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:54:24.668434 | orchestrator | 2025-06-03 15:54:24.668440 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-03 15:54:24.668447 | orchestrator | Tuesday 03 June 2025 15:51:08 +0000 (0:00:02.894) 0:06:10.332 ********** 2025-06-03 15:54:24.668452 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.668458 | orchestrator | skipping: [testbed-node-4] 
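For orientation, the container definitions dumped in the items above are easier to read as YAML. The following is a reconstruction of the nova-compute entry from the logged dictionary (volumes trimmed); it mirrors the kind of service map kolla-ansible keeps for the nova-cell role, though the actual defaults file may be organized differently:

    # Reconstructed from the logged item for illustration only; not copied from
    # the kolla-ansible source. Values match the dictionary printed above.
    nova-compute:
      container_name: nova_compute
      group: compute
      enabled: true
      image: registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530
      environment:
        LIBGUESTFS_BACKEND: direct
      privileged: true
      ipc_mode: host
      volumes:
        - /etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro
        - /lib/modules:/lib/modules:ro
        - /run:/run:shared
        - /dev:/dev
        - kolla_logs:/var/log/kolla/
        - nova_compute:/var/lib/nova/
        # (localtime/timezone mounts and optional entries omitted)
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_port nova-compute 5672"]
        timeout: "30"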
2025-06-03 15:54:24.668464 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.668473 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.668479 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.668485 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.668491 | orchestrator | 2025-06-03 15:54:24.668496 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-03 15:54:24.668502 | orchestrator | Tuesday 03 June 2025 15:51:08 +0000 (0:00:00.499) 0:06:10.831 ********** 2025-06-03 15:54:24.668508 | orchestrator | 2025-06-03 15:54:24.668513 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-03 15:54:24.668518 | orchestrator | Tuesday 03 June 2025 15:51:09 +0000 (0:00:00.259) 0:06:11.091 ********** 2025-06-03 15:54:24.668524 | orchestrator | 2025-06-03 15:54:24.668530 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-03 15:54:24.668536 | orchestrator | Tuesday 03 June 2025 15:51:09 +0000 (0:00:00.124) 0:06:11.215 ********** 2025-06-03 15:54:24.668542 | orchestrator | 2025-06-03 15:54:24.668547 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-03 15:54:24.668558 | orchestrator | Tuesday 03 June 2025 15:51:09 +0000 (0:00:00.134) 0:06:11.350 ********** 2025-06-03 15:54:24.668564 | orchestrator | 2025-06-03 15:54:24.668570 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-03 15:54:24.668576 | orchestrator | Tuesday 03 June 2025 15:51:09 +0000 (0:00:00.139) 0:06:11.489 ********** 2025-06-03 15:54:24.668581 | orchestrator | 2025-06-03 15:54:24.668587 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-03 15:54:24.668593 | orchestrator | Tuesday 03 June 2025 15:51:09 +0000 (0:00:00.121) 0:06:11.610 ********** 2025-06-03 15:54:24.668598 | orchestrator | 2025-06-03 15:54:24.668605 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-03 15:54:24.668611 | orchestrator | Tuesday 03 June 2025 15:51:09 +0000 (0:00:00.124) 0:06:11.735 ********** 2025-06-03 15:54:24.668617 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.668623 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:54:24.668630 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:54:24.668635 | orchestrator | 2025-06-03 15:54:24.668641 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-03 15:54:24.668647 | orchestrator | Tuesday 03 June 2025 15:51:22 +0000 (0:00:13.118) 0:06:24.854 ********** 2025-06-03 15:54:24.668654 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.668660 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:54:24.668665 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:54:24.668671 | orchestrator | 2025-06-03 15:54:24.668677 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-03 15:54:24.668682 | orchestrator | Tuesday 03 June 2025 15:51:35 +0000 (0:00:12.990) 0:06:37.844 ********** 2025-06-03 15:54:24.668688 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.668694 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.668701 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.668727 | orchestrator | 2025-06-03 
15:54:24.668735 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-03 15:54:24.668741 | orchestrator | Tuesday 03 June 2025 15:51:56 +0000 (0:00:20.377) 0:06:58.222 ********** 2025-06-03 15:54:24.668747 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.668753 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.668760 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.668766 | orchestrator | 2025-06-03 15:54:24.668772 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-03 15:54:24.668778 | orchestrator | Tuesday 03 June 2025 15:52:43 +0000 (0:00:46.919) 0:07:45.141 ********** 2025-06-03 15:54:24.668784 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.668791 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.668796 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.668802 | orchestrator | 2025-06-03 15:54:24.668808 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-03 15:54:24.668815 | orchestrator | Tuesday 03 June 2025 15:52:44 +0000 (0:00:01.119) 0:07:46.261 ********** 2025-06-03 15:54:24.668822 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.668828 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.668833 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.668839 | orchestrator | 2025-06-03 15:54:24.668845 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-03 15:54:24.668859 | orchestrator | Tuesday 03 June 2025 15:52:45 +0000 (0:00:00.836) 0:07:47.097 ********** 2025-06-03 15:54:24.668865 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:54:24.668870 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:54:24.668876 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:54:24.668882 | orchestrator | 2025-06-03 15:54:24.668887 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-03 15:54:24.668893 | orchestrator | Tuesday 03 June 2025 15:53:11 +0000 (0:00:26.572) 0:08:13.669 ********** 2025-06-03 15:54:24.668900 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.668912 | orchestrator | 2025-06-03 15:54:24.668918 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-03 15:54:24.668925 | orchestrator | Tuesday 03 June 2025 15:53:11 +0000 (0:00:00.123) 0:08:13.793 ********** 2025-06-03 15:54:24.668931 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.668936 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.668942 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.668949 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.668957 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.668964 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
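The FAILED - RETRYING message above comes from an Ansible retry loop: the task re-runs until every compute host appears in the service list, up to 20 times. A minimal sketch of such a wait, written as a hypothetical task (the real kolla-ansible implementation and module names may differ), looks like this:

    # Hypothetical "wait until registered" task; retries/until/delay are standard
    # Ansible keywords, but the command, container name, and comparison below are
    # assumptions for illustration, not the playbook's actual source.
    - name: Waiting for nova-compute services to register themselves
      command: >
        docker exec kolla_toolbox openstack compute service list
        --service nova-compute -f value -c Host
      register: nova_compute_services
      changed_when: false
      delegate_to: "{{ groups['nova-conductor'][0] }}"
      retries: 20
      delay: 10
      until: >-
        groups['compute'] | difference(nova_compute_services.stdout_lines) | length == 0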
2025-06-03 15:54:24.668970 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:54:24.668975 | orchestrator | 2025-06-03 15:54:24.668981 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-03 15:54:24.668987 | orchestrator | Tuesday 03 June 2025 15:53:34 +0000 (0:00:22.605) 0:08:36.398 ********** 2025-06-03 15:54:24.668993 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.668998 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.669006 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.669011 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.669024 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.669030 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.669036 | orchestrator | 2025-06-03 15:54:24.669043 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-03 15:54:24.669049 | orchestrator | Tuesday 03 June 2025 15:53:43 +0000 (0:00:08.890) 0:08:45.289 ********** 2025-06-03 15:54:24.669054 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.669060 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.669068 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.669074 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.669080 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.669086 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-06-03 15:54:24.669092 | orchestrator | 2025-06-03 15:54:24.669097 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-03 15:54:24.669103 | orchestrator | Tuesday 03 June 2025 15:53:47 +0000 (0:00:04.102) 0:08:49.391 ********** 2025-06-03 15:54:24.669109 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:54:24.669115 | orchestrator | 2025-06-03 15:54:24.669121 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-03 15:54:24.669126 | orchestrator | Tuesday 03 June 2025 15:54:00 +0000 (0:00:13.545) 0:09:02.937 ********** 2025-06-03 15:54:24.669132 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:54:24.669137 | orchestrator | 2025-06-03 15:54:24.669143 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-03 15:54:24.669150 | orchestrator | Tuesday 03 June 2025 15:54:02 +0000 (0:00:01.305) 0:09:04.242 ********** 2025-06-03 15:54:24.669155 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.669161 | orchestrator | 2025-06-03 15:54:24.669167 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-03 15:54:24.669173 | orchestrator | Tuesday 03 June 2025 15:54:03 +0000 (0:00:01.233) 0:09:05.475 ********** 2025-06-03 15:54:24.669179 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:54:24.669185 | orchestrator | 2025-06-03 15:54:24.669191 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-03 15:54:24.669197 | orchestrator | Tuesday 03 June 2025 15:54:16 +0000 (0:00:12.561) 0:09:18.037 ********** 2025-06-03 15:54:24.669203 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:54:24.669209 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:54:24.669214 | orchestrator | ok: 
[testbed-node-5] 2025-06-03 15:54:24.669220 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:24.669226 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:54:24.669239 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:54:24.669245 | orchestrator | 2025-06-03 15:54:24.669251 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-03 15:54:24.669258 | orchestrator | 2025-06-03 15:54:24.669264 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-03 15:54:24.669271 | orchestrator | Tuesday 03 June 2025 15:54:17 +0000 (0:00:01.607) 0:09:19.645 ********** 2025-06-03 15:54:24.669276 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:24.669283 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:54:24.669289 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:54:24.669296 | orchestrator | 2025-06-03 15:54:24.669301 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-03 15:54:24.669309 | orchestrator | 2025-06-03 15:54:24.669315 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-03 15:54:24.669321 | orchestrator | Tuesday 03 June 2025 15:54:18 +0000 (0:00:01.064) 0:09:20.709 ********** 2025-06-03 15:54:24.669327 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.669333 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.669339 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.669345 | orchestrator | 2025-06-03 15:54:24.669350 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-06-03 15:54:24.669356 | orchestrator | 2025-06-03 15:54:24.669363 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-03 15:54:24.669370 | orchestrator | Tuesday 03 June 2025 15:54:19 +0000 (0:00:00.471) 0:09:21.180 ********** 2025-06-03 15:54:24.669376 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-03 15:54:24.669381 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-03 15:54:24.669388 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-03 15:54:24.669493 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-03 15:54:24.669516 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-03 15:54:24.669523 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-03 15:54:24.669529 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:54:24.669535 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-03 15:54:24.669542 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-03 15:54:24.669549 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-03 15:54:24.669555 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-03 15:54:24.669562 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-03 15:54:24.669568 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-03 15:54:24.669574 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-03 15:54:24.669580 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-03 15:54:24.669587 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  
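Earlier in this play, the "Discover nova hosts" step ran once, delegated from testbed-node-4 to testbed-node-0. Host discovery in Nova is normally a single run of nova-manage cell_v2; a rough sketch of such a task follows, with the container name and exact invocation being assumptions rather than the role's actual source:

    # Illustrative only: a one-shot, delegated cell_v2 host discovery.
    - name: Discover nova hosts
      command: docker exec nova_conductor nova-manage cell_v2 discover_hosts --by-service
      run_once: true
      delegate_to: "{{ groups['nova-conductor'][0] }}"
      changed_when: false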
2025-06-03 15:54:24.669593 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-03 15:54:24.669600 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-06-03 15:54:24.669606 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-03 15:54:24.669612 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:54:24.669618 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-03 15:54:24.669632 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-03 15:54:24.669638 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-03 15:54:24.669645 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-03 15:54:24.669652 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-03 15:54:24.669658 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-03 15:54:24.669665 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:54:24.669677 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-03 15:54:24.669684 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-03 15:54:24.669690 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-03 15:54:24.669696 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-03 15:54:24.669703 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-03 15:54:24.669754 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-03 15:54:24.669760 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.669766 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.669772 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-03 15:54:24.669778 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-03 15:54:24.669785 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-03 15:54:24.669791 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-03 15:54:24.669797 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-06-03 15:54:24.669803 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-06-03 15:54:24.669809 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.669816 | orchestrator | 2025-06-03 15:54:24.669822 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-06-03 15:54:24.669828 | orchestrator | 2025-06-03 15:54:24.669835 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-06-03 15:54:24.669841 | orchestrator | Tuesday 03 June 2025 15:54:20 +0000 (0:00:01.159) 0:09:22.340 ********** 2025-06-03 15:54:24.669848 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-06-03 15:54:24.669855 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-06-03 15:54:24.669861 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.669867 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-06-03 15:54:24.669874 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-06-03 15:54:24.669880 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.669886 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-06-03 15:54:24.669893 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-06-03 15:54:24.669899 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.669905 | orchestrator | 2025-06-03 15:54:24.669912 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-06-03 15:54:24.669918 | orchestrator | 2025-06-03 15:54:24.669925 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-06-03 15:54:24.669931 | orchestrator | Tuesday 03 June 2025 15:54:21 +0000 (0:00:00.770) 0:09:23.111 ********** 2025-06-03 15:54:24.669937 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.669944 | orchestrator | 2025-06-03 15:54:24.669950 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-06-03 15:54:24.669956 | orchestrator | 2025-06-03 15:54:24.669962 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-06-03 15:54:24.669969 | orchestrator | Tuesday 03 June 2025 15:54:21 +0000 (0:00:00.751) 0:09:23.862 ********** 2025-06-03 15:54:24.669975 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:24.669981 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:24.669986 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:24.669992 | orchestrator | 2025-06-03 15:54:24.669997 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:54:24.670004 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:54:24.670064 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-06-03 15:54:24.670074 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-03 15:54:24.670086 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-03 15:54:24.670092 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-03 15:54:24.670098 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-06-03 15:54:24.670104 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-03 15:54:24.670110 | orchestrator | 2025-06-03 15:54:24.670117 | orchestrator | 2025-06-03 15:54:24.670123 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:54:24.670129 | orchestrator | Tuesday 03 June 2025 15:54:22 +0000 (0:00:00.458) 0:09:24.321 ********** 2025-06-03 15:54:24.670135 | orchestrator | =============================================================================== 2025-06-03 15:54:24.670148 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 46.92s 2025-06-03 15:54:24.670155 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.43s 2025-06-03 15:54:24.670162 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.57s 2025-06-03 15:54:24.670168 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.22s 2025-06-03 15:54:24.670174 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.61s 2025-06-03 15:54:24.670181 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 21.42s 2025-06-03 15:54:24.670188 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.38s 2025-06-03 15:54:24.670194 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.96s 2025-06-03 15:54:24.670201 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 16.42s 2025-06-03 15:54:24.670207 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.83s 2025-06-03 15:54:24.670214 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.55s 2025-06-03 15:54:24.670221 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.12s 2025-06-03 15:54:24.670228 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.99s 2025-06-03 15:54:24.670234 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.63s 2025-06-03 15:54:24.670241 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.56s 2025-06-03 15:54:24.670249 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.84s 2025-06-03 15:54:24.670256 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.63s 2025-06-03 15:54:24.670262 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 10.22s 2025-06-03 15:54:24.670269 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.95s 2025-06-03 15:54:24.670275 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.65s 2025-06-03 15:54:24.670281 | orchestrator | 2025-06-03 15:54:24 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:24.670289 | orchestrator | 2025-06-03 15:54:24 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:24.670295 | orchestrator | 2025-06-03 15:54:24 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:27.710547 | orchestrator | 2025-06-03 15:54:27 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:27.713274 | orchestrator | 2025-06-03 15:54:27 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:27.713396 | orchestrator | 2025-06-03 15:54:27 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:30.760317 | orchestrator | 2025-06-03 15:54:30 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:30.762477 | orchestrator | 2025-06-03 15:54:30 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:30.762554 | orchestrator | 2025-06-03 15:54:30 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:33.811188 | orchestrator | 2025-06-03 15:54:33 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:33.812315 | orchestrator | 2025-06-03 15:54:33 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:33.812368 | orchestrator | 2025-06-03 15:54:33 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:36.865854 | orchestrator | 2025-06-03 15:54:36 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state STARTED 2025-06-03 15:54:36.866582 | orchestrator | 2025-06-03 15:54:36 | INFO  | Task 
3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:36.866936 | orchestrator | 2025-06-03 15:54:36 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:39.906074 | orchestrator | 2025-06-03 15:54:39 | INFO  | Task 7b559b66-a836-4615-9037-b5b98b5d5dba is in state SUCCESS 2025-06-03 15:54:39.907131 | orchestrator | 2025-06-03 15:54:39.907284 | orchestrator | 2025-06-03 15:54:39.907301 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:54:39.907311 | orchestrator | 2025-06-03 15:54:39.907320 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:54:39.907330 | orchestrator | Tuesday 03 June 2025 15:52:07 +0000 (0:00:00.359) 0:00:00.359 ********** 2025-06-03 15:54:39.907339 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:39.907349 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:54:39.907358 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:54:39.907367 | orchestrator | 2025-06-03 15:54:39.907377 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:54:39.907386 | orchestrator | Tuesday 03 June 2025 15:52:08 +0000 (0:00:00.372) 0:00:00.731 ********** 2025-06-03 15:54:39.907395 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-06-03 15:54:39.907404 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-06-03 15:54:39.907412 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-06-03 15:54:39.907421 | orchestrator | 2025-06-03 15:54:39.907430 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-06-03 15:54:39.907439 | orchestrator | 2025-06-03 15:54:39.907448 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-03 15:54:39.907457 | orchestrator | Tuesday 03 June 2025 15:52:08 +0000 (0:00:00.432) 0:00:01.164 ********** 2025-06-03 15:54:39.907466 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:54:39.907475 | orchestrator | 2025-06-03 15:54:39.907484 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-06-03 15:54:39.907493 | orchestrator | Tuesday 03 June 2025 15:52:09 +0000 (0:00:00.614) 0:00:01.778 ********** 2025-06-03 15:54:39.907505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.907543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.907553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.907996 | orchestrator | 2025-06-03 15:54:39.908014 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-06-03 15:54:39.908024 | orchestrator | Tuesday 03 June 2025 15:52:10 +0000 (0:00:00.894) 0:00:02.673 ********** 2025-06-03 15:54:39.908033 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-06-03 15:54:39.908043 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-06-03 15:54:39.908066 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:54:39.908075 | orchestrator | 2025-06-03 15:54:39.908084 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-03 15:54:39.908093 | orchestrator | Tuesday 03 June 2025 15:52:11 +0000 (0:00:01.027) 0:00:03.700 ********** 2025-06-03 15:54:39.908102 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:54:39.908111 | orchestrator | 2025-06-03 15:54:39.908120 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-06-03 15:54:39.908129 | orchestrator | Tuesday 03 June 2025 15:52:12 +0000 (0:00:00.949) 0:00:04.650 ********** 2025-06-03 15:54:39.908151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.908235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.908681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.908733 | orchestrator | 2025-06-03 15:54:39.908753 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-06-03 15:54:39.908765 | orchestrator | Tuesday 03 June 2025 15:52:13 +0000 (0:00:01.583) 0:00:06.233 ********** 2025-06-03 15:54:39.908777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-03 15:54:39.908788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-03 15:54:39.908809 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:39.908821 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:39.908876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})  2025-06-03 15:54:39.908890 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:39.908901 | orchestrator | 2025-06-03 15:54:39.908912 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-06-03 15:54:39.908923 | orchestrator | Tuesday 03 June 2025 15:52:13 +0000 (0:00:00.340) 0:00:06.574 ********** 2025-06-03 15:54:39.908934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-03 15:54:39.908955 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:39.908967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-03 15:54:39.908979 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:39.909036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-03 15:54:39.909050 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:39.909061 | orchestrator | 2025-06-03 15:54:39.909072 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-06-03 15:54:39.909083 | orchestrator | Tuesday 03 June 2025 15:52:14 +0000 (0:00:00.967) 0:00:07.542 ********** 2025-06-03 15:54:39.909095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.909147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.909162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.909182 | orchestrator | 2025-06-03 15:54:39.909193 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-06-03 15:54:39.909204 | orchestrator | Tuesday 03 June 2025 15:52:16 +0000 (0:00:01.289) 0:00:08.831 ********** 2025-06-03 15:54:39.909233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.909257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.909269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.909283 | orchestrator | 2025-06-03 15:54:39.909300 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-06-03 15:54:39.909324 | orchestrator | Tuesday 03 June 2025 15:52:17 +0000 (0:00:01.318) 0:00:10.150 ********** 2025-06-03 15:54:39.909352 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:39.909369 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:39.909386 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:39.909404 | orchestrator | 2025-06-03 15:54:39.909420 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-06-03 15:54:39.909437 | orchestrator | Tuesday 03 June 2025 15:52:17 +0000 (0:00:00.450) 0:00:10.600 ********** 2025-06-03 15:54:39.909456 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-03 15:54:39.909482 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-03 15:54:39.909501 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-03 15:54:39.909518 | orchestrator | 2025-06-03 15:54:39.909537 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-06-03 15:54:39.909576 | orchestrator | Tuesday 03 June 2025 15:52:19 +0000 (0:00:01.304) 0:00:11.905 ********** 2025-06-03 15:54:39.909594 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-03 15:54:39.909686 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-03 15:54:39.909738 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-03 15:54:39.909758 | orchestrator | 2025-06-03 15:54:39.909777 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-06-03 15:54:39.909795 | orchestrator | Tuesday 03 June 2025 15:52:20 +0000 (0:00:01.333) 0:00:13.238 ********** 2025-06-03 15:54:39.909815 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:54:39.909827 | orchestrator | 2025-06-03 15:54:39.909838 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-06-03 15:54:39.909848 | orchestrator | Tuesday 03 June 2025 15:52:21 +0000 (0:00:00.685) 0:00:13.923 ********** 2025-06-03 15:54:39.909859 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-06-03 15:54:39.909870 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-06-03 15:54:39.909881 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:54:39.909892 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:54:39.909903 | orchestrator | ok: [testbed-node-2] 
2025-06-03 15:54:39.909914 | orchestrator | 2025-06-03 15:54:39.909925 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-06-03 15:54:39.909937 | orchestrator | Tuesday 03 June 2025 15:52:21 +0000 (0:00:00.660) 0:00:14.584 ********** 2025-06-03 15:54:39.909955 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:54:39.909972 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:39.909996 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:39.910081 | orchestrator | 2025-06-03 15:54:39.910103 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-06-03 15:54:39.910120 | orchestrator | Tuesday 03 June 2025 15:52:22 +0000 (0:00:00.415) 0:00:15.000 ********** 2025-06-03 15:54:39.910139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1095997, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6586518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1095997, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6586518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1095997, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6586518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1095978, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6546519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910307 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1095978, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6546519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1095978, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6546519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1095963, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6516516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1095963, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6516516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1095963, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6516516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1095991, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6566517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1095991, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6566517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1095991, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6566517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1095945, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6476517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1095945, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6476517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1095945, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 
'mtime': 1748870577.0, 'ctime': 1748963084.6476517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1095968, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6526518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1095968, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6526518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1095968, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6526518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1095986, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6556518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1095986, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6556518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1095986, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6556518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1095943, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6476517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1095943, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6476517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1095943, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6476517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1095917, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6426516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910732 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1095917, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6426516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1095917, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6426516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1095947, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6486516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1095947, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6486516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1095947, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6486516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1095928, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6446517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1095928, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6446517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1095928, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6446517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1095983, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6546519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1095983, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6546519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1095983, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6546519, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1095949, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6496518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1095949, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6496518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1095949, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6496518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1095995, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6566517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.910986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1095995, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6566517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1095995, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6566517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1095938, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6466517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1095938, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6466517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1095938, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6466517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1095974, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6536517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1095974, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6536517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1095974, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6536517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1095919, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6446517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1095919, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6446517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1095919, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6446517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095930, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6466517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095930, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6466517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095930, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6466517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1095960, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6506517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1095960, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6506517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1095960, 'dev': 112, 'nlink': 1, 'atime': 
1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6506517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096087, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6846523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096087, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6846523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096073, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.671652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096087, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6846523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096073, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.671652, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096009, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.660652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096073, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.671652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096009, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.660652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096126, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6916523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096009, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.660652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096126, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6916523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096012, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6616518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096126, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6916523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096012, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6616518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096123, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6886523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096012, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6616518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096123, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6886523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096140, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6966524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096123, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6886523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096140, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6966524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096109, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6856523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096140, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6966524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096109, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6856523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096121, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6876523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096109, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6856523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-06-03 15:54:39.911939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096121, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6876523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096021, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6626518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.911986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096021, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6626518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096121, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6876523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096075, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.673652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912059 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096075, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.673652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096021, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6626518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096151, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6976523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096151, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6976523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096075, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.673652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096124, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6896522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096124, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6896522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096151, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6976523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096032, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.665652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096032, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.665652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096124, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6896522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096028, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.663652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096028, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.663652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096032, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.665652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096042, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.666652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096042, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1748963084.666652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096028, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.663652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096051, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.670652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096051, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.670652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096042, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.666652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096081, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.673652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096081, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.673652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096051, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.670652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096118, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6866522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096118, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6866522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096081, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.673652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-06-03 15:54:39.912530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096084, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.673652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096084, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.673652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096167, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7016525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096118, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.6866522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096167, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7016525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912605 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096084, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.673652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096167, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748963084.7016525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:54:39.912628 | orchestrator | 2025-06-03 15:54:39.912639 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-06-03 15:54:39.912650 | orchestrator | Tuesday 03 June 2025 15:53:01 +0000 (0:00:39.127) 0:00:54.127 ********** 2025-06-03 15:54:39.912661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.912673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.912717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:54:39.912747 | orchestrator | 2025-06-03 15:54:39.912759 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-06-03 15:54:39.912770 | orchestrator | Tuesday 03 June 2025 15:53:02 +0000 (0:00:00.997) 0:00:55.125 ********** 2025-06-03 15:54:39.912781 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:39.912793 | orchestrator | 2025-06-03 15:54:39.912804 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-06-03 15:54:39.912820 | orchestrator | Tuesday 03 June 2025 15:53:04 +0000 (0:00:02.469) 0:00:57.594 ********** 2025-06-03 15:54:39.912832 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:39.912842 | orchestrator | 2025-06-03 15:54:39.912853 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-03 15:54:39.912864 | orchestrator | Tuesday 03 June 2025 15:53:07 +0000 (0:00:02.582) 0:01:00.177 ********** 2025-06-03 15:54:39.912874 | orchestrator | 2025-06-03 15:54:39.912885 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-03 15:54:39.912896 | orchestrator | Tuesday 03 June 2025 15:53:07 +0000 (0:00:00.255) 0:01:00.433 ********** 2025-06-03 15:54:39.912907 | orchestrator | 2025-06-03 15:54:39.912917 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-03 15:54:39.912928 | orchestrator | Tuesday 03 June 2025 15:53:07 +0000 (0:00:00.064) 0:01:00.497 ********** 2025-06-03 15:54:39.912938 | orchestrator | 2025-06-03 15:54:39.912949 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-06-03 15:54:39.912960 | orchestrator | Tuesday 03 June 2025 15:53:07 +0000 (0:00:00.065) 0:01:00.563 ********** 2025-06-03 15:54:39.912970 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:39.912981 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:39.912992 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:54:39.913003 | orchestrator | 2025-06-03 15:54:39.913013 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-06-03 15:54:39.913024 | orchestrator | Tuesday 03 June 2025 15:53:10 +0000 (0:00:02.059) 0:01:02.623 ********** 2025-06-03 15:54:39.913035 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:54:39.913046 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:54:39.913057 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-06-03 15:54:39.913068 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-06-03 15:54:39.913079 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-06-03 15:54:39.913091 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
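The FAILED - RETRYING messages above come from the role polling the freshly restarted Grafana instance until its HTTP endpoint answers; only then does the handler report ok and the remaining containers are restarted. A minimal sketch of that kind of readiness poll, assuming an /api/health endpoint, 12 attempts and a fixed delay (none of these values are taken from the actual kolla-ansible grafana role):

    import time
    import requests

    def wait_for_grafana(url, retries=12, delay=10):
        # Poll a health URL until it returns HTTP 200, mirroring the
        # "Waiting for grafana to start on first node" retry pattern above.
        # Endpoint path, retry count and delay are illustrative assumptions.
        for attempt in range(retries, 0, -1):
            try:
                if requests.get(url, timeout=5).status_code == 200:
                    return True
            except requests.RequestException:
                pass
            print(f"FAILED - RETRYING: waiting for grafana ({attempt} retries left)")
            time.sleep(delay)
        return False

    # Hypothetical call against the internal endpoint seen elsewhere in this log.
    wait_for_grafana("https://api-int.testbed.osism.xyz:3000/api/health")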
2025-06-03 15:54:39.913102 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:54:39.913120 | orchestrator |
2025-06-03 15:54:39.913141 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-06-03 15:54:39.913160 | orchestrator | Tuesday 03 June 2025 15:54:02 +0000 (0:00:52.045) 0:01:54.669 **********
2025-06-03 15:54:39.913176 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:54:39.913194 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:54:39.913205 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:54:39.913216 | orchestrator |
2025-06-03 15:54:39.913226 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-06-03 15:54:39.913237 | orchestrator | Tuesday 03 June 2025 15:54:32 +0000 (0:00:30.181) 0:02:24.851 **********
2025-06-03 15:54:39.913248 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:54:39.913264 | orchestrator |
2025-06-03 15:54:39.913283 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-06-03 15:54:39.913302 | orchestrator | Tuesday 03 June 2025 15:54:34 +0000 (0:00:02.698) 0:02:27.550 **********
2025-06-03 15:54:39.913321 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:54:39.913333 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:54:39.913343 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:54:39.913361 | orchestrator |
2025-06-03 15:54:39.913372 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-06-03 15:54:39.913383 | orchestrator | Tuesday 03 June 2025 15:54:35 +0000 (0:00:00.296) 0:02:27.846 **********
2025-06-03 15:54:39.913396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-06-03 15:54:39.913408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-06-03 15:54:39.913420 | orchestrator |
2025-06-03 15:54:39.913431 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-06-03 15:54:39.913441 | orchestrator | Tuesday 03 June 2025 15:54:37 +0000 (0:00:02.600) 0:02:30.446 **********
2025-06-03 15:54:39.913452 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:54:39.913463 | orchestrator |
2025-06-03 15:54:39.913474 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:54:39.913485 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-03 15:54:39.913503 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-03 15:54:39.913514 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-03 15:54:39.913525 | orchestrator |
2025-06-03 15:54:39.913535 | orchestrator |
2025-06-03 15:54:39.913546 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:54:39.913557 | orchestrator | Tuesday 03 June 2025 15:54:38 +0000 (0:00:00.242) 0:02:30.688 **********
2025-06-03 15:54:39.913573 | orchestrator | ===============================================================================
2025-06-03 15:54:39.913584 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 52.05s
2025-06-03 15:54:39.913595 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.13s
2025-06-03 15:54:39.913606 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.18s
2025-06-03 15:54:39.913617 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.70s
2025-06-03 15:54:39.913628 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.60s
2025-06-03 15:54:39.913638 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.58s
2025-06-03 15:54:39.913649 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.47s
2025-06-03 15:54:39.913660 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.06s
2025-06-03 15:54:39.913670 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.58s
2025-06-03 15:54:39.913681 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.33s
2025-06-03 15:54:39.913717 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.32s
2025-06-03 15:54:39.913731 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.30s
2025-06-03 15:54:39.913742 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.29s
2025-06-03 15:54:39.913752 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.03s
2025-06-03 15:54:39.913763 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.00s
2025-06-03 15:54:39.913774 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.97s
2025-06-03 15:54:39.913793 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.95s
2025-06-03 15:54:39.913804 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.89s
2025-06-03 15:54:39.913815 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.69s
2025-06-03 15:54:39.913825 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.66s
2025-06-03 15:54:39.913836 | orchestrator | 2025-06-03 15:54:39 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED
2025-06-03 15:54:39.913847 | orchestrator | 2025-06-03 15:54:39 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:54:42.947435 | orchestrator | 2025-06-03 15:54:42 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED
2025-06-03 15:54:42.947545 | orchestrator | 2025-06-03 15:54:42 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:54:45.990454 | orchestrator | 2025-06-03 15:54:45 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED
2025-06-03 15:54:45.991461 | orchestrator | 2025-06-03 15:54:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03
15:54:49.033625 | orchestrator | 2025-06-03 15:54:49 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:49.033761 | orchestrator | 2025-06-03 15:54:49 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:52.086092 | orchestrator | 2025-06-03 15:54:52 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:52.086167 | orchestrator | 2025-06-03 15:54:52 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:55.132175 | orchestrator | 2025-06-03 15:54:55 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:55.132264 | orchestrator | 2025-06-03 15:54:55 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:54:58.174891 | orchestrator | 2025-06-03 15:54:58 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:54:58.174977 | orchestrator | 2025-06-03 15:54:58 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:01.221335 | orchestrator | 2025-06-03 15:55:01 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:01.221455 | orchestrator | 2025-06-03 15:55:01 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:04.264475 | orchestrator | 2025-06-03 15:55:04 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:04.264572 | orchestrator | 2025-06-03 15:55:04 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:07.304494 | orchestrator | 2025-06-03 15:55:07 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:07.304568 | orchestrator | 2025-06-03 15:55:07 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:10.352282 | orchestrator | 2025-06-03 15:55:10 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:10.352352 | orchestrator | 2025-06-03 15:55:10 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:13.399922 | orchestrator | 2025-06-03 15:55:13 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:13.399994 | orchestrator | 2025-06-03 15:55:13 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:16.443369 | orchestrator | 2025-06-03 15:55:16 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:16.443440 | orchestrator | 2025-06-03 15:55:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:19.492039 | orchestrator | 2025-06-03 15:55:19 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:19.492146 | orchestrator | 2025-06-03 15:55:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:22.547876 | orchestrator | 2025-06-03 15:55:22 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:22.548039 | orchestrator | 2025-06-03 15:55:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:25.599733 | orchestrator | 2025-06-03 15:55:25 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:25.599823 | orchestrator | 2025-06-03 15:55:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:28.648144 | orchestrator | 2025-06-03 15:55:28 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:28.648247 | orchestrator | 2025-06-03 15:55:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:31.689254 | orchestrator | 2025-06-03 15:55:31 | INFO  | Task 
3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:31.689371 | orchestrator | 2025-06-03 15:55:31 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:34.739878 | orchestrator | 2025-06-03 15:55:34 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:34.739967 | orchestrator | 2025-06-03 15:55:34 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:37.796105 | orchestrator | 2025-06-03 15:55:37 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:37.796200 | orchestrator | 2025-06-03 15:55:37 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:40.838923 | orchestrator | 2025-06-03 15:55:40 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:40.839020 | orchestrator | 2025-06-03 15:55:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:43.885846 | orchestrator | 2025-06-03 15:55:43 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:43.885913 | orchestrator | 2025-06-03 15:55:43 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:46.936178 | orchestrator | 2025-06-03 15:55:46 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:46.936302 | orchestrator | 2025-06-03 15:55:46 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:49.978093 | orchestrator | 2025-06-03 15:55:49 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:49.978193 | orchestrator | 2025-06-03 15:55:49 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:53.030310 | orchestrator | 2025-06-03 15:55:53 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:53.030396 | orchestrator | 2025-06-03 15:55:53 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:56.066580 | orchestrator | 2025-06-03 15:55:56 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:56.066722 | orchestrator | 2025-06-03 15:55:56 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:59.116447 | orchestrator | 2025-06-03 15:55:59 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:55:59.118693 | orchestrator | 2025-06-03 15:55:59 | INFO  | Task 1a0eedd3-0d2f-4929-9ceb-04cf9541e1b4 is in state STARTED 2025-06-03 15:55:59.118818 | orchestrator | 2025-06-03 15:55:59 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:02.169407 | orchestrator | 2025-06-03 15:56:02 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:02.172898 | orchestrator | 2025-06-03 15:56:02 | INFO  | Task 1a0eedd3-0d2f-4929-9ceb-04cf9541e1b4 is in state STARTED 2025-06-03 15:56:02.172975 | orchestrator | 2025-06-03 15:56:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:05.232872 | orchestrator | 2025-06-03 15:56:05 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:05.232977 | orchestrator | 2025-06-03 15:56:05 | INFO  | Task 1a0eedd3-0d2f-4929-9ceb-04cf9541e1b4 is in state STARTED 2025-06-03 15:56:05.232989 | orchestrator | 2025-06-03 15:56:05 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:08.288692 | orchestrator | 2025-06-03 15:56:08 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:08.290976 | orchestrator | 2025-06-03 15:56:08 | INFO  | Task 
1a0eedd3-0d2f-4929-9ceb-04cf9541e1b4 is in state STARTED 2025-06-03 15:56:08.291028 | orchestrator | 2025-06-03 15:56:08 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:11.350481 | orchestrator | 2025-06-03 15:56:11 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:11.352427 | orchestrator | 2025-06-03 15:56:11 | INFO  | Task 1a0eedd3-0d2f-4929-9ceb-04cf9541e1b4 is in state STARTED 2025-06-03 15:56:11.352689 | orchestrator | 2025-06-03 15:56:11 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:14.408498 | orchestrator | 2025-06-03 15:56:14 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:14.409996 | orchestrator | 2025-06-03 15:56:14 | INFO  | Task 1a0eedd3-0d2f-4929-9ceb-04cf9541e1b4 is in state STARTED 2025-06-03 15:56:14.410068 | orchestrator | 2025-06-03 15:56:14 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:17.461631 | orchestrator | 2025-06-03 15:56:17 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:17.464362 | orchestrator | 2025-06-03 15:56:17 | INFO  | Task 1a0eedd3-0d2f-4929-9ceb-04cf9541e1b4 is in state SUCCESS 2025-06-03 15:56:17.464429 | orchestrator | 2025-06-03 15:56:17 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:20.508202 | orchestrator | 2025-06-03 15:56:20 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:20.508295 | orchestrator | 2025-06-03 15:56:20 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:23.556389 | orchestrator | 2025-06-03 15:56:23 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:23.556459 | orchestrator | 2025-06-03 15:56:23 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:26.596722 | orchestrator | 2025-06-03 15:56:26 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:26.596810 | orchestrator | 2025-06-03 15:56:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:29.640845 | orchestrator | 2025-06-03 15:56:29 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:29.640947 | orchestrator | 2025-06-03 15:56:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:32.690443 | orchestrator | 2025-06-03 15:56:32 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:32.690589 | orchestrator | 2025-06-03 15:56:32 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:35.733358 | orchestrator | 2025-06-03 15:56:35 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:35.733459 | orchestrator | 2025-06-03 15:56:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:38.781309 | orchestrator | 2025-06-03 15:56:38 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:38.781424 | orchestrator | 2025-06-03 15:56:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:41.823654 | orchestrator | 2025-06-03 15:56:41 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:41.823750 | orchestrator | 2025-06-03 15:56:41 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:44.861708 | orchestrator | 2025-06-03 15:56:44 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:44.861798 | orchestrator | 2025-06-03 15:56:44 | INFO  | Wait 1 second(s) until the next check 
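The alternating STARTED / Wait lines are the deployment wrapper polling its background tasks (the two task IDs shown above) once per interval until each reaches a terminal state such as SUCCESS. A bare-bones sketch of such a wait loop, where get_state stands in for whatever client call returns a task's current state (the real OSISM client is not shown in this log):

    import time

    def wait_for_tasks(get_state, task_ids, interval=1):
        # Poll every pending task and drop it once it reaches a terminal state.
        # State names other than STARTED/SUCCESS are assumptions for this sketch.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"INFO | Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"INFO | Wait {interval} second(s) until the next check")
                time.sleep(interval)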
2025-06-03 15:56:47.908928 | orchestrator | 2025-06-03 15:56:47 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:47.909093 | orchestrator | 2025-06-03 15:56:47 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:50.964375 | orchestrator | 2025-06-03 15:56:50 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:50.964445 | orchestrator | 2025-06-03 15:56:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:54.011276 | orchestrator | 2025-06-03 15:56:54 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:54.011392 | orchestrator | 2025-06-03 15:56:54 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:57.041916 | orchestrator | 2025-06-03 15:56:57 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:56:57.041987 | orchestrator | 2025-06-03 15:56:57 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:57:00.084358 | orchestrator | 2025-06-03 15:57:00 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:57:00.084438 | orchestrator | 2025-06-03 15:57:00 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:57:03.139936 | orchestrator | 2025-06-03 15:57:03 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:57:03.140029 | orchestrator | 2025-06-03 15:57:03 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:57:06.181721 | orchestrator | 2025-06-03 15:57:06 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:57:06.181816 | orchestrator | 2025-06-03 15:57:06 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:57:09.223615 | orchestrator | 2025-06-03 15:57:09 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:57:09.223906 | orchestrator | 2025-06-03 15:57:09 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:57:12.275548 | orchestrator | 2025-06-03 15:57:12 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:57:12.275659 | orchestrator | 2025-06-03 15:57:12 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:57:15.322622 | orchestrator | 2025-06-03 15:57:15 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state STARTED 2025-06-03 15:57:15.322729 | orchestrator | 2025-06-03 15:57:15 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:57:18.364766 | orchestrator | 2025-06-03 15:57:18 | INFO  | Task 3fe7cd9e-d7c0-4d82-a52d-30f87095bf1f is in state SUCCESS 2025-06-03 15:57:18.366969 | orchestrator | 2025-06-03 15:57:18.367040 | orchestrator | None 2025-06-03 15:57:18.367049 | orchestrator | 2025-06-03 15:57:18.367065 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:57:18.367073 | orchestrator | 2025-06-03 15:57:18.367080 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:57:18.367087 | orchestrator | Tuesday 03 June 2025 15:52:17 +0000 (0:00:00.274) 0:00:00.274 ********** 2025-06-03 15:57:18.367093 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:57:18.367122 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:57:18.367129 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:57:18.367136 | orchestrator | 2025-06-03 15:57:18.367162 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:57:18.367169 
| orchestrator | Tuesday 03 June 2025 15:52:17 +0000 (0:00:00.279) 0:00:00.553 ********** 2025-06-03 15:57:18.367175 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-06-03 15:57:18.367182 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-06-03 15:57:18.367188 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-03 15:57:18.367194 | orchestrator | 2025-06-03 15:57:18.367200 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-06-03 15:57:18.367207 | orchestrator | 2025-06-03 15:57:18.367213 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-03 15:57:18.367219 | orchestrator | Tuesday 03 June 2025 15:52:18 +0000 (0:00:00.402) 0:00:00.955 ********** 2025-06-03 15:57:18.367226 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:57:18.367242 | orchestrator | 2025-06-03 15:57:18.367249 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-06-03 15:57:18.367256 | orchestrator | Tuesday 03 June 2025 15:52:18 +0000 (0:00:00.491) 0:00:01.447 ********** 2025-06-03 15:57:18.367271 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-06-03 15:57:18.367277 | orchestrator | 2025-06-03 15:57:18.367283 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-06-03 15:57:18.367289 | orchestrator | Tuesday 03 June 2025 15:52:22 +0000 (0:00:03.820) 0:00:05.268 ********** 2025-06-03 15:57:18.367295 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-06-03 15:57:18.367302 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-06-03 15:57:18.367308 | orchestrator | 2025-06-03 15:57:18.367314 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-06-03 15:57:18.367320 | orchestrator | Tuesday 03 June 2025 15:52:30 +0000 (0:00:07.588) 0:00:12.856 ********** 2025-06-03 15:57:18.367327 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:57:18.367333 | orchestrator | 2025-06-03 15:57:18.367339 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-06-03 15:57:18.367345 | orchestrator | Tuesday 03 June 2025 15:52:33 +0000 (0:00:03.745) 0:00:16.602 ********** 2025-06-03 15:57:18.367365 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:57:18.367371 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-03 15:57:18.367378 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-03 15:57:18.367384 | orchestrator | 2025-06-03 15:57:18.367390 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-06-03 15:57:18.367396 | orchestrator | Tuesday 03 June 2025 15:52:43 +0000 (0:00:09.191) 0:00:25.793 ********** 2025-06-03 15:57:18.367402 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:57:18.367408 | orchestrator | 2025-06-03 15:57:18.367414 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-06-03 15:57:18.367421 | orchestrator | Tuesday 03 June 2025 15:52:47 +0000 (0:00:04.349) 0:00:30.142 ********** 2025-06-03 
15:57:18.367427 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-03 15:57:18.367433 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-03 15:57:18.367439 | orchestrator | 2025-06-03 15:57:18.367445 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-06-03 15:57:18.367451 | orchestrator | Tuesday 03 June 2025 15:52:55 +0000 (0:00:08.521) 0:00:38.664 ********** 2025-06-03 15:57:18.367457 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-06-03 15:57:18.367463 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-06-03 15:57:18.367475 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-06-03 15:57:18.367481 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-06-03 15:57:18.367488 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-06-03 15:57:18.367540 | orchestrator | 2025-06-03 15:57:18.367549 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-03 15:57:18.367557 | orchestrator | Tuesday 03 June 2025 15:53:13 +0000 (0:00:17.370) 0:00:56.035 ********** 2025-06-03 15:57:18.367564 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:57:18.367570 | orchestrator | 2025-06-03 15:57:18.367577 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-06-03 15:57:18.367584 | orchestrator | Tuesday 03 June 2025 15:53:14 +0000 (0:00:01.449) 0:00:57.484 ********** 2025-06-03 15:57:18.367591 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.367598 | orchestrator | 2025-06-03 15:57:18.367606 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-06-03 15:57:18.367613 | orchestrator | Tuesday 03 June 2025 15:53:20 +0000 (0:00:05.945) 0:01:03.430 ********** 2025-06-03 15:57:18.367628 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.367635 | orchestrator | 2025-06-03 15:57:18.367782 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-06-03 15:57:18.367810 | orchestrator | Tuesday 03 June 2025 15:53:24 +0000 (0:00:04.092) 0:01:07.522 ********** 2025-06-03 15:57:18.367821 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:57:18.367831 | orchestrator | 2025-06-03 15:57:18.367847 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-06-03 15:57:18.367857 | orchestrator | Tuesday 03 June 2025 15:53:28 +0000 (0:00:03.476) 0:01:10.999 ********** 2025-06-03 15:57:18.368236 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-06-03 15:57:18.368248 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-06-03 15:57:18.368254 | orchestrator | 2025-06-03 15:57:18.368260 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-06-03 15:57:18.368267 | orchestrator | Tuesday 03 June 2025 15:53:40 +0000 (0:00:11.956) 0:01:22.956 ********** 2025-06-03 15:57:18.368274 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-06-03 15:57:18.368280 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, 
{'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-06-03 15:57:18.368289 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-06-03 15:57:18.368296 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-06-03 15:57:18.368303 | orchestrator | 2025-06-03 15:57:18.368309 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-06-03 15:57:18.368315 | orchestrator | Tuesday 03 June 2025 15:54:00 +0000 (0:00:20.061) 0:01:43.018 ********** 2025-06-03 15:57:18.368321 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.368328 | orchestrator | 2025-06-03 15:57:18.368334 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-06-03 15:57:18.368340 | orchestrator | Tuesday 03 June 2025 15:54:05 +0000 (0:00:05.076) 0:01:48.094 ********** 2025-06-03 15:57:18.368346 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.368352 | orchestrator | 2025-06-03 15:57:18.368358 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-06-03 15:57:18.368364 | orchestrator | Tuesday 03 June 2025 15:54:11 +0000 (0:00:05.743) 0:01:53.838 ********** 2025-06-03 15:57:18.368370 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:57:18.368376 | orchestrator | 2025-06-03 15:57:18.368392 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-06-03 15:57:18.368399 | orchestrator | Tuesday 03 June 2025 15:54:11 +0000 (0:00:00.195) 0:01:54.033 ********** 2025-06-03 15:57:18.368405 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.368411 | orchestrator | 2025-06-03 15:57:18.368418 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-03 15:57:18.368431 | orchestrator | Tuesday 03 June 2025 15:54:16 +0000 (0:00:05.530) 0:01:59.564 ********** 2025-06-03 15:57:18.368438 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:57:18.368444 | orchestrator | 2025-06-03 15:57:18.368450 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-06-03 15:57:18.368457 | orchestrator | Tuesday 03 June 2025 15:54:17 +0000 (0:00:01.043) 0:02:00.608 ********** 2025-06-03 15:57:18.368463 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.368469 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.368475 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.368482 | orchestrator | 2025-06-03 15:57:18.368488 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-06-03 15:57:18.368519 | orchestrator | Tuesday 03 June 2025 15:54:23 +0000 (0:00:05.773) 0:02:06.381 ********** 2025-06-03 15:57:18.368526 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.368532 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.368539 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.368545 | orchestrator | 2025-06-03 15:57:18.368551 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-06-03 15:57:18.368557 | orchestrator | Tuesday 03 June 2025 15:54:28 +0000 (0:00:05.057) 
0:02:11.438 ********** 2025-06-03 15:57:18.368563 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.368570 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.368576 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.368582 | orchestrator | 2025-06-03 15:57:18.368588 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-06-03 15:57:18.368594 | orchestrator | Tuesday 03 June 2025 15:54:29 +0000 (0:00:00.799) 0:02:12.238 ********** 2025-06-03 15:57:18.368600 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:57:18.368607 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:57:18.368613 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:57:18.368619 | orchestrator | 2025-06-03 15:57:18.368694 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-06-03 15:57:18.368701 | orchestrator | Tuesday 03 June 2025 15:54:31 +0000 (0:00:02.277) 0:02:14.516 ********** 2025-06-03 15:57:18.368707 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.368714 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.368720 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.368726 | orchestrator | 2025-06-03 15:57:18.368732 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-06-03 15:57:18.368738 | orchestrator | Tuesday 03 June 2025 15:54:33 +0000 (0:00:01.308) 0:02:15.824 ********** 2025-06-03 15:57:18.368745 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.368751 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.368757 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.368798 | orchestrator | 2025-06-03 15:57:18.368806 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-06-03 15:57:18.368813 | orchestrator | Tuesday 03 June 2025 15:54:34 +0000 (0:00:01.227) 0:02:17.051 ********** 2025-06-03 15:57:18.368820 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.368827 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.369020 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.369035 | orchestrator | 2025-06-03 15:57:18.369063 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-06-03 15:57:18.369071 | orchestrator | Tuesday 03 June 2025 15:54:36 +0000 (0:00:02.118) 0:02:19.170 ********** 2025-06-03 15:57:18.369077 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.369092 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.369099 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.369105 | orchestrator | 2025-06-03 15:57:18.369111 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-06-03 15:57:18.369117 | orchestrator | Tuesday 03 June 2025 15:54:38 +0000 (0:00:01.853) 0:02:21.024 ********** 2025-06-03 15:57:18.369124 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:57:18.369130 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:57:18.369136 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:57:18.369142 | orchestrator | 2025-06-03 15:57:18.369148 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-06-03 15:57:18.369158 | orchestrator | Tuesday 03 June 2025 15:54:38 +0000 (0:00:00.653) 0:02:21.677 ********** 2025-06-03 15:57:18.369168 | orchestrator | ok: [testbed-node-2] 2025-06-03 
15:57:18.369178 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:57:18.369188 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:57:18.369199 | orchestrator | 2025-06-03 15:57:18.369210 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-03 15:57:18.369222 | orchestrator | Tuesday 03 June 2025 15:54:42 +0000 (0:00:03.088) 0:02:24.766 ********** 2025-06-03 15:57:18.369234 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:57:18.369245 | orchestrator | 2025-06-03 15:57:18.369257 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-06-03 15:57:18.369264 | orchestrator | Tuesday 03 June 2025 15:54:42 +0000 (0:00:00.734) 0:02:25.500 ********** 2025-06-03 15:57:18.369271 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:57:18.369277 | orchestrator | 2025-06-03 15:57:18.369283 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-06-03 15:57:18.369289 | orchestrator | Tuesday 03 June 2025 15:54:46 +0000 (0:00:03.999) 0:02:29.500 ********** 2025-06-03 15:57:18.369295 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:57:18.369301 | orchestrator | 2025-06-03 15:57:18.369307 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-06-03 15:57:18.369314 | orchestrator | Tuesday 03 June 2025 15:54:50 +0000 (0:00:03.652) 0:02:33.153 ********** 2025-06-03 15:57:18.369320 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-06-03 15:57:18.369326 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-06-03 15:57:18.369332 | orchestrator | 2025-06-03 15:57:18.369338 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-06-03 15:57:18.369345 | orchestrator | Tuesday 03 June 2025 15:54:58 +0000 (0:00:08.019) 0:02:41.172 ********** 2025-06-03 15:57:18.369351 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:57:18.369357 | orchestrator | 2025-06-03 15:57:18.369363 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-06-03 15:57:18.369376 | orchestrator | Tuesday 03 June 2025 15:55:02 +0000 (0:00:03.615) 0:02:44.788 ********** 2025-06-03 15:57:18.369382 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:57:18.369388 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:57:18.369394 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:57:18.369400 | orchestrator | 2025-06-03 15:57:18.369407 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-06-03 15:57:18.369413 | orchestrator | Tuesday 03 June 2025 15:55:02 +0000 (0:00:00.362) 0:02:45.150 ********** 2025-06-03 15:57:18.369422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.369460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.369467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.369475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.369487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.369521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.369530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.369544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.369571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.369580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.369587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.369597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.369605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.369616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.369638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.369645 | orchestrator | 2025-06-03 15:57:18.369652 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-06-03 15:57:18.369658 | orchestrator | Tuesday 03 June 2025 15:55:05 +0000 (0:00:02.772) 0:02:47.922 ********** 2025-06-03 15:57:18.369666 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:57:18.369673 | orchestrator | 2025-06-03 15:57:18.369680 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-06-03 15:57:18.369687 | orchestrator | Tuesday 03 June 2025 15:55:05 +0000 (0:00:00.354) 0:02:48.276 ********** 2025-06-03 15:57:18.369694 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:57:18.369701 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:57:18.369708 | orchestrator | skipping: [testbed-node-2] 
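The policy-related tasks above skip on all three nodes because no operator-supplied Octavia policy override is present in this deployment's configuration. As a rough sketch only (the path follows the usual kolla-ansible node_custom_config convention and the rule shown is purely illustrative; neither is taken from this build), an override that these tasks would detect and copy could be provided on the deployment host before rerunning the deploy:

    # Hypothetical example -- not part of this build's configuration.
    # kolla-ansible looks for per-service policy overrides under its custom
    # config directory (commonly /etc/kolla/config/<service>/) and, when one
    # exists, copies it into the service containers instead of skipping.
    mkdir -p /etc/kolla/config/octavia
    cat > /etc/kolla/config/octavia/policy.yaml <<'EOF'
    # Illustrative oslo.policy override; replace with rules you actually need.
    "os_load-balancer_api:loadbalancer:get_all": "role:reader"
    EOF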
2025-06-03 15:57:18.369715 | orchestrator | 2025-06-03 15:57:18.369727 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-06-03 15:57:18.369742 | orchestrator | Tuesday 03 June 2025 15:55:05 +0000 (0:00:00.290) 0:02:48.567 ********** 2025-06-03 15:57:18.369757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:57:18.369773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:57:18.369791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.369803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.369813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:57:18.369825 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:57:18.369865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:57:18.369878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:57:18.369898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.369914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.369930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:57:18.369944 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:57:18.369981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:57:18.369994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:57:18.370006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:57:18.370126 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:57:18.370132 | orchestrator | 2025-06-03 15:57:18.370138 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-03 15:57:18.370145 | orchestrator | Tuesday 03 June 2025 15:55:06 +0000 (0:00:00.670) 0:02:49.237 ********** 2025-06-03 15:57:18.370151 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:57:18.370158 | orchestrator | 2025-06-03 15:57:18.370164 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-06-03 15:57:18.370170 | orchestrator | Tuesday 03 June 2025 15:55:07 +0000 (0:00:00.559) 0:02:49.796 ********** 2025-06-03 15:57:18.370177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.370208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:18.370232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes':
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.370253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.370266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.370277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.370311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370376 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370389 | orchestrator | 2025-06-03 15:57:18.370396 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-06-03 15:57:18.370408 | orchestrator | Tuesday 03 June 2025 15:55:12 +0000 (0:00:05.253) 0:02:55.050 ********** 2025-06-03 15:57:18.370415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:57:18.370425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:57:18.370432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:57:18.370458 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:57:18.370465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:57:18.370476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:57:18.370486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:57:18.370563 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:57:18.370577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:57:18.370590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:57:18.370597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:57:18.370620 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:57:18.370627 | orchestrator | 2025-06-03 15:57:18.370633 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-06-03 15:57:18.370639 | orchestrator | Tuesday 03 June 2025 15:55:13 +0000 (0:00:00.639) 0:02:55.689 ********** 2025-06-03 15:57:18.370650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:57:18.370657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:57:18.370668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:57:18.370694 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:57:18.370700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:57:18.370707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:57:18.370720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:57:18.370745 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:57:18.370754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:57:18.370761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:57:18.370768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 
'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:57:18.370792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:57:18.370798 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:57:18.370805 | orchestrator | 2025-06-03 15:57:18.370811 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-06-03 15:57:18.370817 | orchestrator | Tuesday 03 June 2025 15:55:13 +0000 (0:00:00.754) 0:02:56.443 ********** 2025-06-03 15:57:18.370827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.370834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.370841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.370855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.370862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.370869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.370879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.370948 | orchestrator | 2025-06-03 15:57:18.370955 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-06-03 15:57:18.370961 | orchestrator | Tuesday 03 June 2025 15:55:19 +0000 (0:00:05.247) 0:03:01.691 ********** 2025-06-03 15:57:18.370968 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-03 15:57:18.370979 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-03 15:57:18.370985 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-03 15:57:18.370992 | orchestrator | 2025-06-03 15:57:18.370998 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-06-03 15:57:18.371004 | orchestrator | Tuesday 03 June 2025 15:55:20 +0000 (0:00:01.594) 0:03:03.285 ********** 2025-06-03 15:57:18.371015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.371022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.371032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.371039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.371045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.371056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.371067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371133 | orchestrator | 2025-06-03 15:57:18.371138 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-06-03 15:57:18.371144 | orchestrator | Tuesday 03 June 2025 15:55:37 +0000 (0:00:17.253) 0:03:20.539 ********** 2025-06-03 15:57:18.371150 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.371155 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.371161 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.371166 | orchestrator | 2025-06-03 15:57:18.371171 | 
orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-06-03 15:57:18.371180 | orchestrator | Tuesday 03 June 2025 15:55:39 +0000 (0:00:01.491) 0:03:22.031 ********** 2025-06-03 15:57:18.371188 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-03 15:57:18.371197 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-03 15:57:18.371206 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-03 15:57:18.371214 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-03 15:57:18.371227 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-03 15:57:18.371235 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-03 15:57:18.371243 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-03 15:57:18.371260 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-03 15:57:18.371268 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-03 15:57:18.371277 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-03 15:57:18.371285 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-03 15:57:18.371294 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-03 15:57:18.371302 | orchestrator | 2025-06-03 15:57:18.371311 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-06-03 15:57:18.371321 | orchestrator | Tuesday 03 June 2025 15:55:44 +0000 (0:00:05.520) 0:03:27.551 ********** 2025-06-03 15:57:18.371330 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-03 15:57:18.371339 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-03 15:57:18.371347 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-03 15:57:18.371356 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-03 15:57:18.371364 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-03 15:57:18.371370 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-03 15:57:18.371375 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-03 15:57:18.371380 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-03 15:57:18.371386 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-03 15:57:18.371391 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-03 15:57:18.371396 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-03 15:57:18.371402 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-03 15:57:18.371407 | orchestrator | 2025-06-03 15:57:18.371413 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-06-03 15:57:18.371418 | orchestrator | Tuesday 03 June 2025 15:55:49 +0000 (0:00:05.109) 0:03:32.661 ********** 2025-06-03 15:57:18.371423 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-03 15:57:18.371429 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-03 15:57:18.371434 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-03 15:57:18.371439 | orchestrator | changed: 
[testbed-node-1] => (item=client_ca.cert.pem) 2025-06-03 15:57:18.371445 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-03 15:57:18.371450 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-03 15:57:18.371460 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-03 15:57:18.371466 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-03 15:57:18.371471 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-03 15:57:18.371476 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-03 15:57:18.371481 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-03 15:57:18.371487 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-03 15:57:18.371510 | orchestrator | 2025-06-03 15:57:18.371516 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-06-03 15:57:18.371522 | orchestrator | Tuesday 03 June 2025 15:55:55 +0000 (0:00:05.261) 0:03:37.922 ********** 2025-06-03 15:57:18.371528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.371543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.371549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:57:18.371554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.371565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.371571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-03 15:57:18.371577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 
15:57:18.371597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:57:18.371648 | orchestrator | 2025-06-03 15:57:18.371654 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-03 15:57:18.371659 | orchestrator | Tuesday 03 June 2025 15:55:59 +0000 (0:00:03.906) 0:03:41.829 ********** 2025-06-03 15:57:18.371665 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:57:18.371670 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:57:18.371675 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:57:18.371681 | orchestrator | 2025-06-03 15:57:18.371686 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-06-03 15:57:18.371692 | orchestrator | Tuesday 03 June 2025 15:55:59 +0000 (0:00:00.313) 0:03:42.142 ********** 2025-06-03 15:57:18.371697 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.371703 | orchestrator | 2025-06-03 15:57:18.371708 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-06-03 15:57:18.371714 | orchestrator | Tuesday 03 June 2025 15:56:01 +0000 (0:00:02.367) 0:03:44.510 ********** 2025-06-03 15:57:18.371719 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.371724 | orchestrator | 2025-06-03 15:57:18.371730 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-06-03 15:57:18.371735 | orchestrator | Tuesday 03 June 2025 15:56:04 +0000 (0:00:02.819) 0:03:47.329 ********** 2025-06-03 15:57:18.371740 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.371746 | orchestrator | 2025-06-03 15:57:18.371751 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-06-03 15:57:18.371757 | orchestrator | Tuesday 03 June 2025 15:56:06 +0000 (0:00:02.335) 0:03:49.665 ********** 2025-06-03 15:57:18.371763 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.371768 | orchestrator | 2025-06-03 15:57:18.371774 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-06-03 15:57:18.371779 | orchestrator | Tuesday 03 June 2025 15:56:09 +0000 (0:00:02.396) 0:03:52.062 ********** 2025-06-03 15:57:18.371784 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.371790 | orchestrator | 2025-06-03 15:57:18.371795 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-03 15:57:18.371801 | orchestrator | Tuesday 03 June 2025 15:56:31 +0000 (0:00:22.318) 0:04:14.380 ********** 
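For reference, each `(item={'key': ..., 'value': {...}})` blob printed by the looped Octavia tasks above (the skipped TLS copies, the config.json and octavia.conf templating, and the container checks) is one entry of the kolla-ansible service map for Octavia, dumped as a Python dict. Re-expressed as YAML it is easier to read; the sketch below reconstructs the octavia-api entry from the values visible in this log. The variable name octavia_services and the exact layout follow the usual kolla-ansible role defaults and are assumptions here, and the two empty '' volume slots shown in the log (optional volumes that were not enabled) are omitted:

octavia_services:            # assumed variable name; values taken from the log above
  octavia-api:
    container_name: octavia_api
    group: octavia-api
    enabled: true
    image: registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530
    volumes:
      - "/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "kolla_logs:/var/log/kolla/"
      - "octavia_driver_agent:/var/run/octavia/"
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"]
      timeout: "30"
    haproxy:
      octavia_api:
        enabled: "yes"
        mode: "http"
        external: false
        port: "9876"
        listen_port: "9876"
        tls_backend: "no"
      octavia_api_external:
        enabled: "yes"
        mode: "http"
        external: true
        external_fqdn: "api.testbed.osism.xyz"
        port: "9876"
        listen_port: "9876"
        tls_backend: "no"

The API container is probed with healthcheck_curl against its bound address, while the non-API services in the same map use healthcheck_port (for example healthcheck_port octavia-worker 5672), which checks that the named process holds a connection to the given port (5672 for RabbitMQ, 3306 for MariaDB) instead of issuing an HTTP request.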
2025-06-03 15:57:18.371806 | orchestrator | 2025-06-03 15:57:18.371811 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-03 15:57:18.371821 | orchestrator | Tuesday 03 June 2025 15:56:31 +0000 (0:00:00.072) 0:04:14.452 ********** 2025-06-03 15:57:18.371826 | orchestrator | 2025-06-03 15:57:18.371832 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-03 15:57:18.371841 | orchestrator | Tuesday 03 June 2025 15:56:31 +0000 (0:00:00.065) 0:04:14.518 ********** 2025-06-03 15:57:18.371847 | orchestrator | 2025-06-03 15:57:18.371852 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-06-03 15:57:18.371857 | orchestrator | Tuesday 03 June 2025 15:56:31 +0000 (0:00:00.073) 0:04:14.591 ********** 2025-06-03 15:57:18.371863 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.371868 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.371874 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.371879 | orchestrator | 2025-06-03 15:57:18.371884 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-06-03 15:57:18.371890 | orchestrator | Tuesday 03 June 2025 15:56:48 +0000 (0:00:16.534) 0:04:31.126 ********** 2025-06-03 15:57:18.371895 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.371901 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.371906 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.371911 | orchestrator | 2025-06-03 15:57:18.371917 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-06-03 15:57:18.371922 | orchestrator | Tuesday 03 June 2025 15:56:55 +0000 (0:00:06.688) 0:04:37.815 ********** 2025-06-03 15:57:18.371927 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.371933 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.371938 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.371943 | orchestrator | 2025-06-03 15:57:18.371949 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-06-03 15:57:18.371954 | orchestrator | Tuesday 03 June 2025 15:57:00 +0000 (0:00:05.602) 0:04:43.417 ********** 2025-06-03 15:57:18.371959 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.371965 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.371970 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.371975 | orchestrator | 2025-06-03 15:57:18.371980 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-06-03 15:57:18.371986 | orchestrator | Tuesday 03 June 2025 15:57:08 +0000 (0:00:08.253) 0:04:51.671 ********** 2025-06-03 15:57:18.371991 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:57:18.371996 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:57:18.372002 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:57:18.372007 | orchestrator | 2025-06-03 15:57:18.372012 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:57:18.372019 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-03 15:57:18.372025 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:57:18.372030 | orchestrator | testbed-node-2 : 
ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:57:18.372036 | orchestrator | 2025-06-03 15:57:18.372041 | orchestrator | 2025-06-03 15:57:18.372046 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:57:18.372055 | orchestrator | Tuesday 03 June 2025 15:57:17 +0000 (0:00:08.643) 0:05:00.315 ********** 2025-06-03 15:57:18.372061 | orchestrator | =============================================================================== 2025-06-03 15:57:18.372066 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.32s 2025-06-03 15:57:18.372071 | orchestrator | octavia : Add rules for security groups -------------------------------- 20.06s 2025-06-03 15:57:18.372077 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.37s 2025-06-03 15:57:18.372082 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.25s 2025-06-03 15:57:18.372092 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.53s 2025-06-03 15:57:18.372098 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.96s 2025-06-03 15:57:18.372103 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.19s 2025-06-03 15:57:18.372108 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.64s 2025-06-03 15:57:18.372114 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.52s 2025-06-03 15:57:18.372119 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.25s 2025-06-03 15:57:18.372124 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.02s 2025-06-03 15:57:18.372130 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.59s 2025-06-03 15:57:18.372135 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.69s 2025-06-03 15:57:18.372140 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.95s 2025-06-03 15:57:18.372145 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.77s 2025-06-03 15:57:18.372151 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.74s 2025-06-03 15:57:18.372156 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.60s 2025-06-03 15:57:18.372161 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.53s 2025-06-03 15:57:18.372167 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.52s 2025-06-03 15:57:18.372172 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.26s 2025-06-03 15:57:21.411820 | orchestrator | 2025-06-03 15:57:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:24.453536 | orchestrator | 2025-06-03 15:57:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:27.492955 | orchestrator | 2025-06-03 15:57:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:30.537681 | orchestrator | 2025-06-03 15:57:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:33.578724 | orchestrator | 2025-06-03 
15:57:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:36.620691 | orchestrator | 2025-06-03 15:57:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:39.662210 | orchestrator | 2025-06-03 15:57:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:42.705677 | orchestrator | 2025-06-03 15:57:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:45.750640 | orchestrator | 2025-06-03 15:57:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:48.789432 | orchestrator | 2025-06-03 15:57:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:51.828353 | orchestrator | 2025-06-03 15:57:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:54.870572 | orchestrator | 2025-06-03 15:57:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:57:57.916213 | orchestrator | 2025-06-03 15:57:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:58:00.966879 | orchestrator | 2025-06-03 15:58:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:58:04.023398 | orchestrator | 2025-06-03 15:58:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:58:07.063937 | orchestrator | 2025-06-03 15:58:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:58:10.101766 | orchestrator | 2025-06-03 15:58:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:58:13.146313 | orchestrator | 2025-06-03 15:58:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:58:16.185476 | orchestrator | 2025-06-03 15:58:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:58:19.230202 | orchestrator | 2025-06-03 15:58:19.502222 | orchestrator | 2025-06-03 15:58:19.508201 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Jun 3 15:58:19 UTC 2025 2025-06-03 15:58:19.508272 | orchestrator | 2025-06-03 15:58:19.869962 | orchestrator | ok: Runtime: 0:34:40.510836 2025-06-03 15:58:20.150748 | 2025-06-03 15:58:20.150927 | TASK [Bootstrap services] 2025-06-03 15:58:20.960289 | orchestrator | 2025-06-03 15:58:20.960538 | orchestrator | # BOOTSTRAP 2025-06-03 15:58:20.960573 | orchestrator | 2025-06-03 15:58:20.960595 | orchestrator | + set -e 2025-06-03 15:58:20.960615 | orchestrator | + echo 2025-06-03 15:58:20.960635 | orchestrator | + echo '# BOOTSTRAP' 2025-06-03 15:58:20.960660 | orchestrator | + echo 2025-06-03 15:58:20.960719 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-03 15:58:20.966599 | orchestrator | + set -e 2025-06-03 15:58:20.966738 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-03 15:58:25.326636 | orchestrator | 2025-06-03 15:58:25 | INFO  | It takes a moment until task 5c19516c-3f1a-4501-84a8-81b2bc0a5b32 (flavor-manager) has been started and output is visible here. 
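The trace above shows the bootstrap entry point: bootstrap-services.sh runs with set -e and hands off to the numbered scripts under /opt/configuration/scripts/bootstrap/, starting with 300-openstack.sh. A minimal sketch of that wrapper pattern follows; only the two script paths visible in the trace are taken from the log, the glob-based loop is an assumption:

    #!/bin/sh
    # Sketch of a bootstrap-services wrapper: run the numbered bootstrap
    # scripts in order and stop at the first failure (set -e).
    set -e
    for script in /opt/configuration/scripts/bootstrap/*.sh; do
        echo "# running ${script}"
        sh -c "${script}"
    done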
2025-06-03 15:58:29.412615 | orchestrator | 2025-06-03 15:58:29 | INFO  | Flavor SCS-1V-4 created 2025-06-03 15:58:29.607087 | orchestrator | 2025-06-03 15:58:29 | INFO  | Flavor SCS-2V-8 created 2025-06-03 15:58:29.781640 | orchestrator | 2025-06-03 15:58:29 | INFO  | Flavor SCS-4V-16 created 2025-06-03 15:58:29.944484 | orchestrator | 2025-06-03 15:58:29 | INFO  | Flavor SCS-8V-32 created 2025-06-03 15:58:30.073607 | orchestrator | 2025-06-03 15:58:30 | INFO  | Flavor SCS-1V-2 created 2025-06-03 15:58:30.183151 | orchestrator | 2025-06-03 15:58:30 | INFO  | Flavor SCS-2V-4 created 2025-06-03 15:58:30.335451 | orchestrator | 2025-06-03 15:58:30 | INFO  | Flavor SCS-4V-8 created 2025-06-03 15:58:30.460848 | orchestrator | 2025-06-03 15:58:30 | INFO  | Flavor SCS-8V-16 created 2025-06-03 15:58:30.582675 | orchestrator | 2025-06-03 15:58:30 | INFO  | Flavor SCS-16V-32 created 2025-06-03 15:58:30.759379 | orchestrator | 2025-06-03 15:58:30 | INFO  | Flavor SCS-1V-8 created 2025-06-03 15:58:30.893942 | orchestrator | 2025-06-03 15:58:30 | INFO  | Flavor SCS-2V-16 created 2025-06-03 15:58:31.054796 | orchestrator | 2025-06-03 15:58:31 | INFO  | Flavor SCS-4V-32 created 2025-06-03 15:58:31.204233 | orchestrator | 2025-06-03 15:58:31 | INFO  | Flavor SCS-1L-1 created 2025-06-03 15:58:31.346858 | orchestrator | 2025-06-03 15:58:31 | INFO  | Flavor SCS-2V-4-20s created 2025-06-03 15:58:31.491190 | orchestrator | 2025-06-03 15:58:31 | INFO  | Flavor SCS-4V-16-100s created 2025-06-03 15:58:31.636387 | orchestrator | 2025-06-03 15:58:31 | INFO  | Flavor SCS-1V-4-10 created 2025-06-03 15:58:31.773890 | orchestrator | 2025-06-03 15:58:31 | INFO  | Flavor SCS-2V-8-20 created 2025-06-03 15:58:31.921518 | orchestrator | 2025-06-03 15:58:31 | INFO  | Flavor SCS-4V-16-50 created 2025-06-03 15:58:32.064550 | orchestrator | 2025-06-03 15:58:32 | INFO  | Flavor SCS-8V-32-100 created 2025-06-03 15:58:32.195758 | orchestrator | 2025-06-03 15:58:32 | INFO  | Flavor SCS-1V-2-5 created 2025-06-03 15:58:32.351826 | orchestrator | 2025-06-03 15:58:32 | INFO  | Flavor SCS-2V-4-10 created 2025-06-03 15:58:32.478118 | orchestrator | 2025-06-03 15:58:32 | INFO  | Flavor SCS-4V-8-20 created 2025-06-03 15:58:32.621200 | orchestrator | 2025-06-03 15:58:32 | INFO  | Flavor SCS-8V-16-50 created 2025-06-03 15:58:32.776991 | orchestrator | 2025-06-03 15:58:32 | INFO  | Flavor SCS-16V-32-100 created 2025-06-03 15:58:32.940096 | orchestrator | 2025-06-03 15:58:32 | INFO  | Flavor SCS-1V-8-20 created 2025-06-03 15:58:33.095582 | orchestrator | 2025-06-03 15:58:33 | INFO  | Flavor SCS-2V-16-50 created 2025-06-03 15:58:33.223979 | orchestrator | 2025-06-03 15:58:33 | INFO  | Flavor SCS-4V-32-100 created 2025-06-03 15:58:33.392256 | orchestrator | 2025-06-03 15:58:33 | INFO  | Flavor SCS-1L-1-5 created 2025-06-03 15:58:35.678601 | orchestrator | 2025-06-03 15:58:35 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-03 15:58:35.683125 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:58:35.683185 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:58:35.683213 | orchestrator | Registering Redlock._release_script 2025-06-03 15:58:35.744432 | orchestrator | 2025-06-03 15:58:35 | INFO  | Task 65eb48ea-8e67-4cb1-b679-0fb578daee58 (bootstrap-basic) was prepared for execution. 2025-06-03 15:58:35.744526 | orchestrator | 2025-06-03 15:58:35 | INFO  | It takes a moment until task 65eb48ea-8e67-4cb1-b679-0fb578daee58 (bootstrap-basic) has been started and output is visible here. 
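The flavors above follow the SCS naming scheme, where SCS-<n>V-<m> denotes n vCPUs and m GiB of RAM, and an optional third field gives the root disk size in GB. This run drives the creation through openstack-flavor-manager, but the same flavors could be created by hand; a sketch for two of the names listed above, with the numbers read directly off the flavor names:

    # SCS-1V-4: 1 vCPU, 4 GiB RAM, no root disk (instances boot from volume)
    openstack flavor create --public --vcpus 1 --ram 4096 --disk 0 SCS-1V-4

    # SCS-2V-8-20: 2 vCPUs, 8 GiB RAM, 20 GB root disk
    openstack flavor create --public --vcpus 2 --ram 8192 --disk 20 SCS-2V-8-20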
2025-06-03 15:58:40.017237 | orchestrator | 2025-06-03 15:58:40.019443 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-06-03 15:58:40.020050 | orchestrator | 2025-06-03 15:58:40.022487 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:58:40.023184 | orchestrator | Tuesday 03 June 2025 15:58:40 +0000 (0:00:00.077) 0:00:00.077 ********** 2025-06-03 15:58:41.987638 | orchestrator | ok: [localhost] 2025-06-03 15:58:41.987724 | orchestrator | 2025-06-03 15:58:41.988615 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-06-03 15:58:41.989450 | orchestrator | Tuesday 03 June 2025 15:58:41 +0000 (0:00:01.974) 0:00:02.052 ********** 2025-06-03 15:58:50.145794 | orchestrator | ok: [localhost] 2025-06-03 15:58:50.147426 | orchestrator | 2025-06-03 15:58:50.147469 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-06-03 15:58:50.147475 | orchestrator | Tuesday 03 June 2025 15:58:50 +0000 (0:00:08.157) 0:00:10.209 ********** 2025-06-03 15:58:57.152300 | orchestrator | changed: [localhost] 2025-06-03 15:58:57.153261 | orchestrator | 2025-06-03 15:58:57.154119 | orchestrator | TASK [Get volume type local] *************************************************** 2025-06-03 15:58:57.154350 | orchestrator | Tuesday 03 June 2025 15:58:57 +0000 (0:00:07.006) 0:00:17.216 ********** 2025-06-03 15:59:03.506937 | orchestrator | ok: [localhost] 2025-06-03 15:59:03.507285 | orchestrator | 2025-06-03 15:59:03.509082 | orchestrator | TASK [Create volume type local] ************************************************ 2025-06-03 15:59:03.510189 | orchestrator | Tuesday 03 June 2025 15:59:03 +0000 (0:00:06.353) 0:00:23.569 ********** 2025-06-03 15:59:10.059555 | orchestrator | changed: [localhost] 2025-06-03 15:59:10.059638 | orchestrator | 2025-06-03 15:59:10.060316 | orchestrator | TASK [Create public network] *************************************************** 2025-06-03 15:59:10.061259 | orchestrator | Tuesday 03 June 2025 15:59:10 +0000 (0:00:06.551) 0:00:30.120 ********** 2025-06-03 15:59:15.375720 | orchestrator | changed: [localhost] 2025-06-03 15:59:15.378093 | orchestrator | 2025-06-03 15:59:15.385185 | orchestrator | TASK [Set public network to default] ******************************************* 2025-06-03 15:59:15.385831 | orchestrator | Tuesday 03 June 2025 15:59:15 +0000 (0:00:05.311) 0:00:35.431 ********** 2025-06-03 15:59:21.523196 | orchestrator | changed: [localhost] 2025-06-03 15:59:21.523949 | orchestrator | 2025-06-03 15:59:21.524740 | orchestrator | TASK [Create public subnet] **************************************************** 2025-06-03 15:59:21.525619 | orchestrator | Tuesday 03 June 2025 15:59:21 +0000 (0:00:06.153) 0:00:41.585 ********** 2025-06-03 15:59:26.130005 | orchestrator | changed: [localhost] 2025-06-03 15:59:26.130463 | orchestrator | 2025-06-03 15:59:26.132164 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-06-03 15:59:26.133609 | orchestrator | Tuesday 03 June 2025 15:59:26 +0000 (0:00:04.607) 0:00:46.192 ********** 2025-06-03 15:59:29.962433 | orchestrator | changed: [localhost] 2025-06-03 15:59:29.962764 | orchestrator | 2025-06-03 15:59:29.963918 | orchestrator | TASK [Create manager role] ***************************************************** 2025-06-03 15:59:29.965136 | orchestrator | Tuesday 03 June 2025 
15:59:29 +0000 (0:00:03.830) 0:00:50.023 ********** 2025-06-03 15:59:33.761746 | orchestrator | ok: [localhost] 2025-06-03 15:59:33.761839 | orchestrator | 2025-06-03 15:59:33.763677 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:59:33.763729 | orchestrator | 2025-06-03 15:59:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:59:33.763739 | orchestrator | 2025-06-03 15:59:33 | INFO  | Please wait and do not abort execution. 2025-06-03 15:59:33.764497 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:59:33.765179 | orchestrator | 2025-06-03 15:59:33.765245 | orchestrator | 2025-06-03 15:59:33.765734 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:59:33.765814 | orchestrator | Tuesday 03 June 2025 15:59:33 +0000 (0:00:03.798) 0:00:53.822 ********** 2025-06-03 15:59:33.766488 | orchestrator | =============================================================================== 2025-06-03 15:59:33.766563 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.16s 2025-06-03 15:59:33.767004 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.01s 2025-06-03 15:59:33.767214 | orchestrator | Create volume type local ------------------------------------------------ 6.55s 2025-06-03 15:59:33.767616 | orchestrator | Get volume type local --------------------------------------------------- 6.35s 2025-06-03 15:59:33.768212 | orchestrator | Set public network to default ------------------------------------------- 6.15s 2025-06-03 15:59:33.768672 | orchestrator | Create public network --------------------------------------------------- 5.31s 2025-06-03 15:59:33.770280 | orchestrator | Create public subnet ---------------------------------------------------- 4.61s 2025-06-03 15:59:33.770305 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.83s 2025-06-03 15:59:33.770313 | orchestrator | Create manager role ----------------------------------------------------- 3.80s 2025-06-03 15:59:33.771076 | orchestrator | Gathering Facts --------------------------------------------------------- 1.97s 2025-06-03 15:59:36.249310 | orchestrator | 2025-06-03 15:59:36 | INFO  | It takes a moment until task aaaa52a0-9886-4f57-9792-ba4b05ce3f57 (image-manager) has been started and output is visible here. 2025-06-03 15:59:39.997227 | orchestrator | 2025-06-03 15:59:39 | INFO  | Processing image 'Cirros 0.6.2' 2025-06-03 15:59:40.294581 | orchestrator | 2025-06-03 15:59:40 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-06-03 15:59:40.295457 | orchestrator | 2025-06-03 15:59:40 | INFO  | Importing image Cirros 0.6.2 2025-06-03 15:59:40.296246 | orchestrator | 2025-06-03 15:59:40 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-03 15:59:42.026862 | orchestrator | 2025-06-03 15:59:42 | INFO  | Waiting for image to leave queued state... 2025-06-03 15:59:44.063998 | orchestrator | 2025-06-03 15:59:44 | INFO  | Waiting for import to complete... 
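The bootstrap-basic play is written as get-then-create pairs, which keeps it idempotent on re-runs. Roughly equivalent OpenStack CLI calls for the resources created above, as a sketch only; resource names not shown in the log, the CIDRs, and the encryption details are placeholders rather than values taken from this run:

    # Volume types; the 'Get ...' tasks above correspond to the existence checks
    openstack volume type show LUKS >/dev/null 2>&1 || \
        openstack volume type create --encryption-provider luks LUKS
    openstack volume type show local >/dev/null 2>&1 || \
        openstack volume type create local

    # External provider network, marked as the default external network
    openstack network create --external public
    openstack network set --default public
    openstack subnet create --network public --subnet-range <public-cidr> public-subnet

    # Default IPv4 subnet pool and the manager role
    openstack subnet pool create --default --pool-prefix <pool-cidr> default-ipv4-pool
    openstack role create manager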
2025-06-03 15:59:54.363078 | orchestrator | 2025-06-03 15:59:54 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-06-03 15:59:54.827792 | orchestrator | 2025-06-03 15:59:54 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-06-03 15:59:54.828627 | orchestrator | 2025-06-03 15:59:54 | INFO  | Setting internal_version = 0.6.2 2025-06-03 15:59:54.829810 | orchestrator | 2025-06-03 15:59:54 | INFO  | Setting image_original_user = cirros 2025-06-03 15:59:54.830671 | orchestrator | 2025-06-03 15:59:54 | INFO  | Adding tag os:cirros 2025-06-03 15:59:55.128225 | orchestrator | 2025-06-03 15:59:55 | INFO  | Setting property architecture: x86_64 2025-06-03 15:59:55.455518 | orchestrator | 2025-06-03 15:59:55 | INFO  | Setting property hw_disk_bus: scsi 2025-06-03 15:59:55.690488 | orchestrator | 2025-06-03 15:59:55 | INFO  | Setting property hw_rng_model: virtio 2025-06-03 15:59:55.916356 | orchestrator | 2025-06-03 15:59:55 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-03 15:59:56.133635 | orchestrator | 2025-06-03 15:59:56 | INFO  | Setting property hw_watchdog_action: reset 2025-06-03 15:59:56.339157 | orchestrator | 2025-06-03 15:59:56 | INFO  | Setting property hypervisor_type: qemu 2025-06-03 15:59:56.562828 | orchestrator | 2025-06-03 15:59:56 | INFO  | Setting property os_distro: cirros 2025-06-03 15:59:56.784810 | orchestrator | 2025-06-03 15:59:56 | INFO  | Setting property replace_frequency: never 2025-06-03 15:59:57.014627 | orchestrator | 2025-06-03 15:59:57 | INFO  | Setting property uuid_validity: none 2025-06-03 15:59:57.219362 | orchestrator | 2025-06-03 15:59:57 | INFO  | Setting property provided_until: none 2025-06-03 15:59:57.448104 | orchestrator | 2025-06-03 15:59:57 | INFO  | Setting property image_description: Cirros 2025-06-03 15:59:57.687118 | orchestrator | 2025-06-03 15:59:57 | INFO  | Setting property image_name: Cirros 2025-06-03 15:59:57.870118 | orchestrator | 2025-06-03 15:59:57 | INFO  | Setting property internal_version: 0.6.2 2025-06-03 15:59:58.101155 | orchestrator | 2025-06-03 15:59:58 | INFO  | Setting property image_original_user: cirros 2025-06-03 15:59:58.327463 | orchestrator | 2025-06-03 15:59:58 | INFO  | Setting property os_version: 0.6.2 2025-06-03 15:59:58.533815 | orchestrator | 2025-06-03 15:59:58 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-03 15:59:58.748422 | orchestrator | 2025-06-03 15:59:58 | INFO  | Setting property image_build_date: 2023-05-30 2025-06-03 15:59:58.939641 | orchestrator | 2025-06-03 15:59:58 | INFO  | Checking status of 'Cirros 0.6.2' 2025-06-03 15:59:58.939795 | orchestrator | 2025-06-03 15:59:58 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-06-03 15:59:58.940734 | orchestrator | 2025-06-03 15:59:58 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-06-03 15:59:59.142072 | orchestrator | 2025-06-03 15:59:59 | INFO  | Processing image 'Cirros 0.6.3' 2025-06-03 15:59:59.379582 | orchestrator | 2025-06-03 15:59:59 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-06-03 15:59:59.380326 | orchestrator | 2025-06-03 15:59:59 | INFO  | Importing image Cirros 0.6.3 2025-06-03 15:59:59.382419 | orchestrator | 2025-06-03 15:59:59 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-03 16:00:00.583077 | orchestrator | 2025-06-03 
16:00:00 | INFO  | Waiting for image to leave queued state... 2025-06-03 16:00:02.619247 | orchestrator | 2025-06-03 16:00:02 | INFO  | Waiting for import to complete... 2025-06-03 16:00:12.914286 | orchestrator | 2025-06-03 16:00:12 | INFO  | Waiting for import to complete... 2025-06-03 16:00:23.073477 | orchestrator | 2025-06-03 16:00:23 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-06-03 16:00:23.312881 | orchestrator | 2025-06-03 16:00:23 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-06-03 16:00:23.312993 | orchestrator | 2025-06-03 16:00:23 | INFO  | Setting internal_version = 0.6.3 2025-06-03 16:00:23.313480 | orchestrator | 2025-06-03 16:00:23 | INFO  | Setting image_original_user = cirros 2025-06-03 16:00:23.314225 | orchestrator | 2025-06-03 16:00:23 | INFO  | Adding tag os:cirros 2025-06-03 16:00:23.571839 | orchestrator | 2025-06-03 16:00:23 | INFO  | Setting property architecture: x86_64 2025-06-03 16:00:23.793067 | orchestrator | 2025-06-03 16:00:23 | INFO  | Setting property hw_disk_bus: scsi 2025-06-03 16:00:24.058294 | orchestrator | 2025-06-03 16:00:24 | INFO  | Setting property hw_rng_model: virtio 2025-06-03 16:00:24.264428 | orchestrator | 2025-06-03 16:00:24 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-03 16:00:24.484440 | orchestrator | 2025-06-03 16:00:24 | INFO  | Setting property hw_watchdog_action: reset 2025-06-03 16:00:24.721991 | orchestrator | 2025-06-03 16:00:24 | INFO  | Setting property hypervisor_type: qemu 2025-06-03 16:00:24.959960 | orchestrator | 2025-06-03 16:00:24 | INFO  | Setting property os_distro: cirros 2025-06-03 16:00:25.167538 | orchestrator | 2025-06-03 16:00:25 | INFO  | Setting property replace_frequency: never 2025-06-03 16:00:25.375866 | orchestrator | 2025-06-03 16:00:25 | INFO  | Setting property uuid_validity: none 2025-06-03 16:00:25.576180 | orchestrator | 2025-06-03 16:00:25 | INFO  | Setting property provided_until: none 2025-06-03 16:00:25.805626 | orchestrator | 2025-06-03 16:00:25 | INFO  | Setting property image_description: Cirros 2025-06-03 16:00:26.015548 | orchestrator | 2025-06-03 16:00:26 | INFO  | Setting property image_name: Cirros 2025-06-03 16:00:26.222239 | orchestrator | 2025-06-03 16:00:26 | INFO  | Setting property internal_version: 0.6.3 2025-06-03 16:00:26.414239 | orchestrator | 2025-06-03 16:00:26 | INFO  | Setting property image_original_user: cirros 2025-06-03 16:00:26.866864 | orchestrator | 2025-06-03 16:00:26 | INFO  | Setting property os_version: 0.6.3 2025-06-03 16:00:27.085816 | orchestrator | 2025-06-03 16:00:27 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-03 16:00:27.282608 | orchestrator | 2025-06-03 16:00:27 | INFO  | Setting property image_build_date: 2024-09-26 2025-06-03 16:00:27.493022 | orchestrator | 2025-06-03 16:00:27 | INFO  | Checking status of 'Cirros 0.6.3' 2025-06-03 16:00:27.494152 | orchestrator | 2025-06-03 16:00:27 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-06-03 16:00:27.494747 | orchestrator | 2025-06-03 16:00:27 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-06-03 16:00:28.540317 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-06-03 16:00:30.433033 | orchestrator | 2025-06-03 16:00:30 | INFO  | date: 2025-06-03 2025-06-03 16:00:30.433117 | orchestrator | 2025-06-03 16:00:30 | INFO  | image: octavia-amphora-haproxy-2024.2.20250603.qcow2 
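openstack-image-manager first pushes each image into Glance via URL import and then applies the metadata logged above (tags, properties, visibility). Replayed by hand for one of the Cirros images, the metadata step would look roughly like this; the property list is taken from the log, the command form itself is only a sketch:

    # Apply the image metadata that openstack-image-manager set above
    openstack image set \
        --property architecture=x86_64 \
        --property hw_disk_bus=scsi \
        --property hw_rng_model=virtio \
        --property hw_scsi_model=virtio-scsi \
        --property hw_watchdog_action=reset \
        --property hypervisor_type=qemu \
        --property os_distro=cirros \
        --tag os:cirros \
        --public \
        "Cirros 0.6.2"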
2025-06-03 16:00:30.433236 | orchestrator | 2025-06-03 16:00:30 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250603.qcow2 2025-06-03 16:00:30.433261 | orchestrator | 2025-06-03 16:00:30 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250603.qcow2.CHECKSUM 2025-06-03 16:00:30.470454 | orchestrator | 2025-06-03 16:00:30 | INFO  | checksum: 7f57cebcf47e21267f186897438d3e2a516fb862e8a8c745c06679ffa81da60f 2025-06-03 16:00:30.559676 | orchestrator | 2025-06-03 16:00:30 | INFO  | It takes a moment until task 0058d714-bb94-4a91-ae1a-33171ede95ff (image-manager) has been started and output is visible here. 2025-06-03 16:00:30.781680 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-06-03 16:00:30.781897 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-06-03 16:00:32.445563 | orchestrator | 2025-06-03 16:00:32 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-03' 2025-06-03 16:00:32.459531 | orchestrator | 2025-06-03 16:00:32 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250603.qcow2: 200 2025-06-03 16:00:32.460166 | orchestrator | 2025-06-03 16:00:32 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-03 2025-06-03 16:00:32.461013 | orchestrator | 2025-06-03 16:00:32 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250603.qcow2 2025-06-03 16:00:32.860989 | orchestrator | 2025-06-03 16:00:32 | INFO  | Waiting for image to leave queued state... 2025-06-03 16:00:34.907933 | orchestrator | 2025-06-03 16:00:34 | INFO  | Waiting for import to complete... 2025-06-03 16:00:45.001664 | orchestrator | 2025-06-03 16:00:44 | INFO  | Waiting for import to complete... 2025-06-03 16:00:55.096838 | orchestrator | 2025-06-03 16:00:55 | INFO  | Waiting for import to complete... 2025-06-03 16:01:05.175122 | orchestrator | 2025-06-03 16:01:05 | INFO  | Waiting for import to complete... 2025-06-03 16:01:15.265549 | orchestrator | 2025-06-03 16:01:15 | INFO  | Waiting for import to complete... 
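Before the amphora image is imported, the 301 script resolves the download URL and the published SHA-256 shown above. A minimal sketch of such a verification step, assuming the .CHECKSUM object simply contains the hex digest of the qcow2 (its exact layout is not visible in this log):

    # Fetch the published checksum and compare it against a local download
    url="https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250603.qcow2"
    expected=$(curl -fsSL "${url}.CHECKSUM" | grep -oE '[a-f0-9]{64}' | head -n 1)
    curl -fsSL -o amphora.qcow2 "${url}"
    echo "${expected}  amphora.qcow2" | sha256sum -c -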
2025-06-03 16:01:25.397690 | orchestrator | 2025-06-03 16:01:25 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-03' successfully completed, reloading images 2025-06-03 16:01:25.944795 | orchestrator | 2025-06-03 16:01:25 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-03' 2025-06-03 16:01:25.945162 | orchestrator | 2025-06-03 16:01:25 | INFO  | Setting internal_version = 2025-06-03 2025-06-03 16:01:25.946208 | orchestrator | 2025-06-03 16:01:25 | INFO  | Setting image_original_user = ubuntu 2025-06-03 16:01:25.947274 | orchestrator | 2025-06-03 16:01:25 | INFO  | Adding tag amphora 2025-06-03 16:01:26.197286 | orchestrator | 2025-06-03 16:01:26 | INFO  | Adding tag os:ubuntu 2025-06-03 16:01:26.428307 | orchestrator | 2025-06-03 16:01:26 | INFO  | Setting property architecture: x86_64 2025-06-03 16:01:26.719915 | orchestrator | 2025-06-03 16:01:26 | INFO  | Setting property hw_disk_bus: scsi 2025-06-03 16:01:26.941975 | orchestrator | 2025-06-03 16:01:26 | INFO  | Setting property hw_rng_model: virtio 2025-06-03 16:01:27.168465 | orchestrator | 2025-06-03 16:01:27 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-03 16:01:27.389508 | orchestrator | 2025-06-03 16:01:27 | INFO  | Setting property hw_watchdog_action: reset 2025-06-03 16:01:27.598322 | orchestrator | 2025-06-03 16:01:27 | INFO  | Setting property hypervisor_type: qemu 2025-06-03 16:01:27.820726 | orchestrator | 2025-06-03 16:01:27 | INFO  | Setting property os_distro: ubuntu 2025-06-03 16:01:28.066478 | orchestrator | 2025-06-03 16:01:28 | INFO  | Setting property replace_frequency: quarterly 2025-06-03 16:01:28.292763 | orchestrator | 2025-06-03 16:01:28 | INFO  | Setting property uuid_validity: last-1 2025-06-03 16:01:28.518174 | orchestrator | 2025-06-03 16:01:28 | INFO  | Setting property provided_until: none 2025-06-03 16:01:28.756830 | orchestrator | 2025-06-03 16:01:28 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-06-03 16:01:28.993803 | orchestrator | 2025-06-03 16:01:28 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-06-03 16:01:29.206514 | orchestrator | 2025-06-03 16:01:29 | INFO  | Setting property internal_version: 2025-06-03 2025-06-03 16:01:29.452233 | orchestrator | 2025-06-03 16:01:29 | INFO  | Setting property image_original_user: ubuntu 2025-06-03 16:01:29.691596 | orchestrator | 2025-06-03 16:01:29 | INFO  | Setting property os_version: 2025-06-03 2025-06-03 16:01:29.893314 | orchestrator | 2025-06-03 16:01:29 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250603.qcow2 2025-06-03 16:01:30.127970 | orchestrator | 2025-06-03 16:01:30 | INFO  | Setting property image_build_date: 2025-06-03 2025-06-03 16:01:30.354887 | orchestrator | 2025-06-03 16:01:30 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-03' 2025-06-03 16:01:30.356668 | orchestrator | 2025-06-03 16:01:30 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-03' 2025-06-03 16:01:30.517203 | orchestrator | 2025-06-03 16:01:30 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-06-03 16:01:30.517793 | orchestrator | 2025-06-03 16:01:30 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-06-03 16:01:30.519242 | orchestrator | 2025-06-03 16:01:30 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-06-03 16:01:30.520164 | 
orchestrator | 2025-06-03 16:01:30 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-06-03 16:01:31.335898 | orchestrator | ok: Runtime: 0:03:10.424657 2025-06-03 16:01:31.363226 | 2025-06-03 16:01:31.363412 | TASK [Run checks] 2025-06-03 16:01:32.121399 | orchestrator | + set -e 2025-06-03 16:01:32.121535 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-03 16:01:32.121546 | orchestrator | ++ export INTERACTIVE=false 2025-06-03 16:01:32.121556 | orchestrator | ++ INTERACTIVE=false 2025-06-03 16:01:32.121561 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-03 16:01:32.121566 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-03 16:01:32.121572 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-03 16:01:32.121961 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-03 16:01:32.125320 | orchestrator | 2025-06-03 16:01:32.125379 | orchestrator | # CHECK 2025-06-03 16:01:32.125384 | orchestrator | 2025-06-03 16:01:32.125389 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-03 16:01:32.125397 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-03 16:01:32.125401 | orchestrator | + echo 2025-06-03 16:01:32.125405 | orchestrator | + echo '# CHECK' 2025-06-03 16:01:32.125409 | orchestrator | + echo 2025-06-03 16:01:32.125417 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-03 16:01:32.125836 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-03 16:01:32.182761 | orchestrator | 2025-06-03 16:01:32.182852 | orchestrator | ## Containers @ testbed-manager 2025-06-03 16:01:32.182863 | orchestrator | 2025-06-03 16:01:32.182872 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-03 16:01:32.182880 | orchestrator | + echo 2025-06-03 16:01:32.182887 | orchestrator | + echo '## Containers @ testbed-manager' 2025-06-03 16:01:32.182895 | orchestrator | + echo 2025-06-03 16:01:32.182902 | orchestrator | + osism container testbed-manager ps 2025-06-03 16:01:34.303943 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-03 16:01:34.304035 | orchestrator | 932440b17ab4 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter 2025-06-03 16:01:34.304046 | orchestrator | 3cf43a68f9fd registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager 2025-06-03 16:01:34.304057 | orchestrator | 5082536a0716 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-03 16:01:34.304061 | orchestrator | 473cd2091649 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-06-03 16:01:34.304066 | orchestrator | abdd2792f587 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server 2025-06-03 16:01:34.304071 | orchestrator | c20b8ba36105 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 18 minutes ago Up 17 minutes cephclient 2025-06-03 16:01:34.304079 | orchestrator | 2ca5023692a1 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-06-03 16:01:34.304084 | 
orchestrator | 2d01c571edf2 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-06-03 16:01:34.304088 | orchestrator | 35374a2c4ea3 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-06-03 16:01:34.304121 | orchestrator | 4426102c643e phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin 2025-06-03 16:01:34.304126 | orchestrator | b773c4c1ee66 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 31 minutes openstackclient 2025-06-03 16:01:34.304131 | orchestrator | da86aaa0d765 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 32 minutes ago Up 31 minutes (healthy) 8080/tcp homer 2025-06-03 16:01:34.304136 | orchestrator | 27e3d45533b4 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-06-03 16:01:34.304142 | orchestrator | 030a2f5d4003 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 56 minutes ago Up 38 minutes (healthy) manager-inventory_reconciler-1 2025-06-03 16:01:34.304147 | orchestrator | 04c25788602a registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 38 minutes (healthy) kolla-ansible 2025-06-03 16:01:34.304151 | orchestrator | f1ec1ace37fe registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 38 minutes (healthy) ceph-ansible 2025-06-03 16:01:34.304155 | orchestrator | 3074baaee967 registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 56 minutes ago Up 38 minutes (healthy) osism-ansible 2025-06-03 16:01:34.304160 | orchestrator | 78b74531a2c7 registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 38 minutes (healthy) osism-kubernetes 2025-06-03 16:01:34.304164 | orchestrator | 2c4196c772eb registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 56 minutes ago Up 38 minutes (healthy) 8000/tcp manager-ara-server-1 2025-06-03 16:01:34.304168 | orchestrator | ab264c51c292 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-openstack-1 2025-06-03 16:01:34.304172 | orchestrator | 802a36d533d0 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-06-03 16:01:34.304177 | orchestrator | 95d01a3ee97c registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 56 minutes ago Up 39 minutes (healthy) osismclient 2025-06-03 16:01:34.304181 | orchestrator | d964611e5153 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 56 minutes ago Up 39 minutes (healthy) 6379/tcp manager-redis-1 2025-06-03 16:01:34.304189 | orchestrator | d33e7e2591e3 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-listener-1 2025-06-03 16:01:34.304203 | orchestrator | c0191d02bab2 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-beat-1 2025-06-03 16:01:34.304222 | orchestrator | c1eb6b465ae4 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 56 minutes ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1 2025-06-03 
16:01:34.304227 | orchestrator | 4beba1c5b4ef registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-flower-1 2025-06-03 16:01:34.304231 | orchestrator | 3a80b12ea556 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-06-03 16:01:34.537899 | orchestrator | 2025-06-03 16:01:34.537984 | orchestrator | ## Images @ testbed-manager 2025-06-03 16:01:34.537991 | orchestrator | 2025-06-03 16:01:34.537996 | orchestrator | + echo 2025-06-03 16:01:34.538002 | orchestrator | + echo '## Images @ testbed-manager' 2025-06-03 16:01:34.538009 | orchestrator | + echo 2025-06-03 16:01:34.538042 | orchestrator | + osism container testbed-manager images 2025-06-03 16:01:36.625416 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-03 16:01:36.625528 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d16a1b460037 13 hours ago 11.5MB 2025-06-03 16:01:36.625549 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 5ede29b4dda4 13 hours ago 225MB 2025-06-03 16:01:36.625561 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 f5f0b51afbcc 27 hours ago 574MB 2025-06-03 16:01:36.625573 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 2 days ago 578MB 2025-06-03 16:01:36.625606 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 days ago 319MB 2025-06-03 16:01:36.625646 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 days ago 747MB 2025-06-03 16:01:36.625658 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 days ago 629MB 2025-06-03 16:01:36.625669 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 3 days ago 892MB 2025-06-03 16:01:36.625680 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 3 days ago 361MB 2025-06-03 16:01:36.625691 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 days ago 411MB 2025-06-03 16:01:36.625702 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 days ago 359MB 2025-06-03 16:01:36.625714 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 3 days ago 457MB 2025-06-03 16:01:36.625725 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 3 days ago 538MB 2025-06-03 16:01:36.625760 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 3 days ago 1.21GB 2025-06-03 16:01:36.625772 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 3 days ago 308MB 2025-06-03 16:01:36.625783 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 4 days ago 297MB 2025-06-03 16:01:36.625794 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 5 days ago 41.4MB 2025-06-03 16:01:36.625805 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 7 days ago 224MB 2025-06-03 16:01:36.625816 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 3 weeks ago 453MB 2025-06-03 16:01:36.625826 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 
11.7.2 6b3ebe9793bb 3 months ago 328MB 2025-06-03 16:01:36.625841 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB 2025-06-03 16:01:36.625860 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB 2025-06-03 16:01:36.625878 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB 2025-06-03 16:01:36.869772 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-03 16:01:36.870285 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-03 16:01:36.929110 | orchestrator | 2025-06-03 16:01:36.929207 | orchestrator | ## Containers @ testbed-node-0 2025-06-03 16:01:36.929221 | orchestrator | 2025-06-03 16:01:36.929230 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-03 16:01:36.929239 | orchestrator | + echo 2025-06-03 16:01:36.929248 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-06-03 16:01:36.929257 | orchestrator | + echo 2025-06-03 16:01:36.929265 | orchestrator | + osism container testbed-node-0 ps 2025-06-03 16:01:39.054306 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-03 16:01:39.054424 | orchestrator | 2b482715e442 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-03 16:01:39.054442 | orchestrator | 02cca478061c registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-03 16:01:39.054455 | orchestrator | a5011f034e96 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-03 16:01:39.054466 | orchestrator | 55329b28c815 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-03 16:01:39.054477 | orchestrator | 0ee3c5f2559f registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-06-03 16:01:39.054488 | orchestrator | 13dfb7bc42e5 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-03 16:01:39.054499 | orchestrator | cab4fd7ced08 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-06-03 16:01:39.054534 | orchestrator | 1b1918e17715 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-03 16:01:39.054548 | orchestrator | 8bee71a2fc3f registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-03 16:01:39.054582 | orchestrator | 6e373f564d1f registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-03 16:01:39.054594 | orchestrator | 0f875264f5a7 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-03 16:01:39.054605 | orchestrator | 2a1b919e4564 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-06-03 
16:01:39.054616 | orchestrator | a1900c92d714 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-06-03 16:01:39.054655 | orchestrator | 9c5541977cc7 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-06-03 16:01:39.054667 | orchestrator | c8bead570df8 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-03 16:01:39.054678 | orchestrator | 86ef16681a0d registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-06-03 16:01:39.054690 | orchestrator | 96506bf5e969 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-06-03 16:01:39.054701 | orchestrator | 55ed2745eee4 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-06-03 16:01:39.054712 | orchestrator | 0b215f8a3ed1 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-06-03 16:01:39.054738 | orchestrator | c5f153314ee0 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-06-03 16:01:39.054749 | orchestrator | 3513f7393578 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-06-03 16:01:39.054760 | orchestrator | 6658d3512fe3 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-06-03 16:01:39.054771 | orchestrator | 4953f052d933 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-03 16:01:39.054782 | orchestrator | 406357494f42 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-06-03 16:01:39.054793 | orchestrator | 454e5d172cf9 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-06-03 16:01:39.054804 | orchestrator | 54be914ffb69 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-06-03 16:01:39.054823 | orchestrator | 6c807d41d317 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-06-03 16:01:39.054843 | orchestrator | f986e378c094 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-03 16:01:39.054855 | orchestrator | 46272496c8f7 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-06-03 16:01:39.054866 | orchestrator | 60c0f31eaa24 
registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-06-03 16:01:39.054882 | orchestrator | e16a58dbba13 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-06-03 16:01:39.054893 | orchestrator | 7c7f40f2253c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2025-06-03 16:01:39.054904 | orchestrator | 1d750838bd0a registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) keystone 2025-06-03 16:01:39.054915 | orchestrator | d3d6ee8c799d registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-06-03 16:01:39.054926 | orchestrator | 3c25053832cb registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-06-03 16:01:39.054937 | orchestrator | a1b2dbda071c registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-06-03 16:01:39.054948 | orchestrator | 2309a5e3429d registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-06-03 16:01:39.054964 | orchestrator | d160b22ea22c registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-03 16:01:39.054975 | orchestrator | 42d8bc166e5b registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-03 16:01:39.054986 | orchestrator | 6c7ff32b878b registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-06-03 16:01:39.055011 | orchestrator | b507db31cbf9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2025-06-03 16:01:39.055022 | orchestrator | cbee74601d23 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-06-03 16:01:39.055033 | orchestrator | f2b7d9fe0e2e registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-06-03 16:01:39.055044 | orchestrator | d7e45d80dbad registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-06-03 16:01:39.055055 | orchestrator | 33f81ef22a81 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-06-03 16:01:39.055076 | orchestrator | 268c04f8ec2a registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-06-03 16:01:39.055088 | orchestrator | 070801e6f086 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-06-03 16:01:39.055099 | orchestrator | fd85f8e36d0c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes 
ceph-mon-testbed-node-0 2025-06-03 16:01:39.055110 | orchestrator | 11997ba0bbf5 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-03 16:01:39.055121 | orchestrator | db3aa3e4e47a registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-06-03 16:01:39.055132 | orchestrator | e39c65b573e2 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-03 16:01:39.055143 | orchestrator | 936062ee7d54 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-03 16:01:39.055154 | orchestrator | dd5a397be5c7 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-03 16:01:39.055165 | orchestrator | 3ed845c1bec5 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-03 16:01:39.055175 | orchestrator | b506c34c4bfd registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-06-03 16:01:39.055186 | orchestrator | 8dd84f57feb1 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-03 16:01:39.055197 | orchestrator | 97e59e17e91a registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-03 16:01:39.311869 | orchestrator | 2025-06-03 16:01:39.311991 | orchestrator | ## Images @ testbed-node-0 2025-06-03 16:01:39.312014 | orchestrator | 2025-06-03 16:01:39.312028 | orchestrator | + echo 2025-06-03 16:01:39.312044 | orchestrator | + echo '## Images @ testbed-node-0' 2025-06-03 16:01:39.312061 | orchestrator | + echo 2025-06-03 16:01:39.312078 | orchestrator | + osism container testbed-node-0 images 2025-06-03 16:01:41.482989 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-03 16:01:41.483062 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 3 days ago 319MB 2025-06-03 16:01:41.483069 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 days ago 319MB 2025-06-03 16:01:41.483074 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 days ago 330MB 2025-06-03 16:01:41.483079 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 days ago 1.59GB 2025-06-03 16:01:41.483084 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 days ago 1.55GB 2025-06-03 16:01:41.483088 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 days ago 419MB 2025-06-03 16:01:41.483114 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 days ago 747MB 2025-06-03 16:01:41.483119 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 days ago 376MB 2025-06-03 16:01:41.483123 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 days ago 327MB 2025-06-03 16:01:41.483128 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 
3 days ago 629MB 2025-06-03 16:01:41.483145 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 days ago 1.01GB 2025-06-03 16:01:41.483149 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 days ago 591MB 2025-06-03 16:01:41.483154 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 days ago 354MB 2025-06-03 16:01:41.483159 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 days ago 352MB 2025-06-03 16:01:41.483164 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 days ago 411MB 2025-06-03 16:01:41.483171 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 days ago 345MB 2025-06-03 16:01:41.483178 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 days ago 359MB 2025-06-03 16:01:41.483185 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 days ago 326MB 2025-06-03 16:01:41.483193 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 days ago 325MB 2025-06-03 16:01:41.483200 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 days ago 1.21GB 2025-06-03 16:01:41.483207 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 days ago 362MB 2025-06-03 16:01:41.483214 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 days ago 362MB 2025-06-03 16:01:41.483221 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 days ago 1.15GB 2025-06-03 16:01:41.483228 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 days ago 1.04GB 2025-06-03 16:01:41.483235 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 days ago 1.25GB 2025-06-03 16:01:41.483243 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 3 days ago 1.04GB 2025-06-03 16:01:41.483250 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 3 days ago 1.04GB 2025-06-03 16:01:41.483258 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 3 days ago 1.04GB 2025-06-03 16:01:41.483265 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 3 days ago 1.04GB 2025-06-03 16:01:41.483272 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 days ago 1.2GB 2025-06-03 16:01:41.483281 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 days ago 1.31GB 2025-06-03 16:01:41.483299 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 3 days ago 1.12GB 2025-06-03 16:01:41.483304 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 3 days ago 1.12GB 2025-06-03 16:01:41.483314 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 3 days ago 1.1GB 2025-06-03 16:01:41.483319 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 3 days ago 1.1GB 
2025-06-03 16:01:41.483327 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 3 days ago 1.1GB 2025-06-03 16:01:41.483332 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 days ago 1.41GB 2025-06-03 16:01:41.483337 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 days ago 1.41GB 2025-06-03 16:01:41.483341 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 days ago 1.06GB 2025-06-03 16:01:41.483346 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 days ago 1.06GB 2025-06-03 16:01:41.483350 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 days ago 1.05GB 2025-06-03 16:01:41.483355 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 days ago 1.05GB 2025-06-03 16:01:41.483359 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 days ago 1.05GB 2025-06-03 16:01:41.483364 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 days ago 1.05GB 2025-06-03 16:01:41.483368 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 3 days ago 1.04GB 2025-06-03 16:01:41.483373 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 3 days ago 1.04GB 2025-06-03 16:01:41.483377 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 days ago 1.3GB 2025-06-03 16:01:41.483382 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 days ago 1.29GB 2025-06-03 16:01:41.483386 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 days ago 1.42GB 2025-06-03 16:01:41.483391 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 days ago 1.29GB 2025-06-03 16:01:41.483395 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 days ago 1.06GB 2025-06-03 16:01:41.483400 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 days ago 1.06GB 2025-06-03 16:01:41.483404 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 days ago 1.06GB 2025-06-03 16:01:41.483409 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 days ago 1.11GB 2025-06-03 16:01:41.483413 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 days ago 1.13GB 2025-06-03 16:01:41.483418 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 days ago 1.11GB 2025-06-03 16:01:41.483422 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 3 days ago 1.11GB 2025-06-03 16:01:41.483427 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 3 days ago 1.12GB 2025-06-03 16:01:41.483431 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 days ago 947MB 2025-06-03 16:01:41.483439 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 days ago 947MB 2025-06-03 
16:01:41.483444 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 days ago 948MB 2025-06-03 16:01:41.483448 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 days ago 948MB 2025-06-03 16:01:41.483453 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-03 16:01:41.738254 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-03 16:01:41.738439 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-03 16:01:41.795678 | orchestrator | 2025-06-03 16:01:41.795777 | orchestrator | ## Containers @ testbed-node-1 2025-06-03 16:01:41.795795 | orchestrator | 2025-06-03 16:01:41.795809 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-03 16:01:41.795822 | orchestrator | + echo 2025-06-03 16:01:41.795836 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-06-03 16:01:41.795850 | orchestrator | + echo 2025-06-03 16:01:41.795862 | orchestrator | + osism container testbed-node-1 ps 2025-06-03 16:01:44.053733 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-03 16:01:44.053832 | orchestrator | b0ef49dfc9c0 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-03 16:01:44.053843 | orchestrator | e0e8b0ead3ec registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-03 16:01:44.053854 | orchestrator | f0cc0ae8ea60 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-03 16:01:44.053862 | orchestrator | 2de4a48de71b registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-03 16:01:44.053868 | orchestrator | 8817f3773600 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-06-03 16:01:44.053880 | orchestrator | 6c8dd98e0d6b registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-06-03 16:01:44.053886 | orchestrator | 3a1d6e115473 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-03 16:01:44.053892 | orchestrator | 06fa54a346a2 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-06-03 16:01:44.053900 | orchestrator | 2c148f440184 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-03 16:01:44.053906 | orchestrator | 8f34291c58bc registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-03 16:01:44.053913 | orchestrator | fb4af58f8bac registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-03 16:01:44.053919 | orchestrator | cbddd721b310 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-06-03 
16:01:44.053948 | orchestrator | dca51ba141b4 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-06-03 16:01:44.053955 | orchestrator | ac0c2beca4b2 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-06-03 16:01:44.053961 | orchestrator | 87088472ad64 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-06-03 16:01:44.053967 | orchestrator | 359bbdaccd40 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-03 16:01:44.053973 | orchestrator | 169165fee7f5 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-06-03 16:01:44.053979 | orchestrator | 05f6c2c996db registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-06-03 16:01:44.054002 | orchestrator | f3087a1a434c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-06-03 16:01:44.054082 | orchestrator | 84d7468bd2e3 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-06-03 16:01:44.054090 | orchestrator | 9fddcba8dc3c registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-06-03 16:01:44.054100 | orchestrator | 337c298e1832 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-06-03 16:01:44.054106 | orchestrator | a5ad7e6628e2 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-03 16:01:44.054112 | orchestrator | dbad84504176 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-06-03 16:01:44.054118 | orchestrator | 2b653715d854 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-06-03 16:01:44.054131 | orchestrator | d78771e4e4de registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-06-03 16:01:44.054138 | orchestrator | 2e20e61efbeb registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-06-03 16:01:44.054147 | orchestrator | 5224348825f3 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-06-03 16:01:44.054153 | orchestrator | 435cbc84bcd6 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-06-03 16:01:44.054159 | orchestrator | 6acca6fa0e52 
registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-06-03 16:01:44.054172 | orchestrator | 54e714e98c9a registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-06-03 16:01:44.054178 | orchestrator | 74d50de8c8f1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-06-03 16:01:44.054184 | orchestrator | 958ed6eb621a registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-06-03 16:01:44.054190 | orchestrator | 52b92e84b2af registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-06-03 16:01:44.054195 | orchestrator | b3bcb57dbd04 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-06-03 16:01:44.054201 | orchestrator | 4009568e26eb registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-06-03 16:01:44.054207 | orchestrator | e4f448e6747b registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-06-03 16:01:44.054213 | orchestrator | bb7f422f6e44 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-06-03 16:01:44.054219 | orchestrator | b81e66c749f8 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-06-03 16:01:44.054225 | orchestrator | e19581a82bd5 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-06-03 16:01:44.054237 | orchestrator | fd9e94932e13 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1 2025-06-03 16:01:44.054250 | orchestrator | cdf180c3128b registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-06-03 16:01:44.054256 | orchestrator | 4e6bdf11571d registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-06-03 16:01:44.054265 | orchestrator | 11ebd3edadcd registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd 2025-06-03 16:01:44.054272 | orchestrator | b036d0c6175a registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db 2025-06-03 16:01:44.054280 | orchestrator | 405e46db0773 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db 2025-06-03 16:01:44.054287 | orchestrator | 9da3b53deb0b registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-06-03 16:01:44.054295 | orchestrator | cc88fbe8c55e registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes 
(healthy) rabbitmq 2025-06-03 16:01:44.054301 | orchestrator | 5c92c998442e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2025-06-03 16:01:44.054340 | orchestrator | 7d5ccee6ed5b registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-06-03 16:01:44.054355 | orchestrator | adf231ea5c49 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-03 16:01:44.054363 | orchestrator | e9f6aa96482a registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-03 16:01:44.054373 | orchestrator | 3586551ff3e5 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-03 16:01:44.054379 | orchestrator | d0b94ed7b56a registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-03 16:01:44.054389 | orchestrator | 04e21a9ff947 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-06-03 16:01:44.054397 | orchestrator | 6c1ab96538e1 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-03 16:01:44.054406 | orchestrator | 67a54ea45a61 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-06-03 16:01:44.306676 | orchestrator | 2025-06-03 16:01:44.306746 | orchestrator | ## Images @ testbed-node-1 2025-06-03 16:01:44.306752 | orchestrator | 2025-06-03 16:01:44.306757 | orchestrator | + echo 2025-06-03 16:01:44.306761 | orchestrator | + echo '## Images @ testbed-node-1' 2025-06-03 16:01:44.306766 | orchestrator | + echo 2025-06-03 16:01:44.306771 | orchestrator | + osism container testbed-node-1 images 2025-06-03 16:01:46.440114 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-03 16:01:46.440174 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 3 days ago 319MB 2025-06-03 16:01:46.440190 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 days ago 319MB 2025-06-03 16:01:46.440204 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 days ago 330MB 2025-06-03 16:01:46.440218 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 days ago 1.59GB 2025-06-03 16:01:46.440232 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 days ago 1.55GB 2025-06-03 16:01:46.440242 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 days ago 419MB 2025-06-03 16:01:46.440251 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 days ago 747MB 2025-06-03 16:01:46.440259 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 days ago 376MB 2025-06-03 16:01:46.440268 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 days ago 327MB 2025-06-03 16:01:46.440277 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 days ago 
629MB 2025-06-03 16:01:46.440286 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 days ago 1.01GB 2025-06-03 16:01:46.440316 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 days ago 591MB 2025-06-03 16:01:46.440325 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 days ago 354MB 2025-06-03 16:01:46.440334 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 days ago 411MB 2025-06-03 16:01:46.440359 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 days ago 352MB 2025-06-03 16:01:46.440368 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 days ago 345MB 2025-06-03 16:01:46.440377 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 days ago 359MB 2025-06-03 16:01:46.440385 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 days ago 326MB 2025-06-03 16:01:46.440394 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 days ago 325MB 2025-06-03 16:01:46.440403 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 days ago 1.21GB 2025-06-03 16:01:46.440411 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 days ago 362MB 2025-06-03 16:01:46.440454 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 days ago 362MB 2025-06-03 16:01:46.440464 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 days ago 1.15GB 2025-06-03 16:01:46.440472 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 days ago 1.04GB 2025-06-03 16:01:46.440481 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 days ago 1.25GB 2025-06-03 16:01:46.440489 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 days ago 1.2GB 2025-06-03 16:01:46.440503 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 days ago 1.31GB 2025-06-03 16:01:46.440512 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 3 days ago 1.12GB 2025-06-03 16:01:46.440599 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 3 days ago 1.12GB 2025-06-03 16:01:46.440609 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 3 days ago 1.1GB 2025-06-03 16:01:46.440768 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 3 days ago 1.1GB 2025-06-03 16:01:46.440805 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 3 days ago 1.1GB 2025-06-03 16:01:46.440818 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 days ago 1.41GB 2025-06-03 16:01:46.440841 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 days ago 1.41GB 2025-06-03 16:01:46.440852 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 days ago 
1.06GB 2025-06-03 16:01:46.440890 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 days ago 1.06GB 2025-06-03 16:01:46.440902 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 days ago 1.05GB 2025-06-03 16:01:46.440913 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 days ago 1.05GB 2025-06-03 16:01:46.440934 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 days ago 1.05GB 2025-06-03 16:01:46.440944 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 days ago 1.05GB 2025-06-03 16:01:46.440955 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 days ago 1.3GB 2025-06-03 16:01:46.440965 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 days ago 1.29GB 2025-06-03 16:01:46.440976 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 days ago 1.42GB 2025-06-03 16:01:46.440986 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 days ago 1.29GB 2025-06-03 16:01:46.440997 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 days ago 1.06GB 2025-06-03 16:01:46.441007 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 days ago 1.06GB 2025-06-03 16:01:46.441017 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 days ago 1.06GB 2025-06-03 16:01:46.441028 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 days ago 1.11GB 2025-06-03 16:01:46.441038 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 days ago 1.13GB 2025-06-03 16:01:46.441049 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 days ago 1.11GB 2025-06-03 16:01:46.441058 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 days ago 947MB 2025-06-03 16:01:46.441068 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 days ago 948MB 2025-06-03 16:01:46.441077 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 days ago 947MB 2025-06-03 16:01:46.441086 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 days ago 948MB 2025-06-03 16:01:46.441094 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-03 16:01:46.675533 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-03 16:01:46.676194 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-03 16:01:46.712583 | orchestrator | 2025-06-03 16:01:46.712717 | orchestrator | ## Containers @ testbed-node-2 2025-06-03 16:01:46.712734 | orchestrator | 2025-06-03 16:01:46.712746 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-03 16:01:46.712758 | orchestrator | + echo 2025-06-03 16:01:46.712770 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-03 16:01:46.712782 | orchestrator | + echo 2025-06-03 16:01:46.712793 | orchestrator | + osism container testbed-node-2 ps 2025-06-03 16:01:48.818725 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS 
PORTS NAMES 2025-06-03 16:01:48.818844 | orchestrator | 76561f7e204e registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-03 16:01:48.818864 | orchestrator | a7ef1beb1f1b registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-03 16:01:48.818876 | orchestrator | a56fd6ab2360 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-03 16:01:48.818911 | orchestrator | 9c5d77943940 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-03 16:01:48.818924 | orchestrator | 09170c49b1f7 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-06-03 16:01:48.818936 | orchestrator | b5717b0808b4 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-06-03 16:01:48.818946 | orchestrator | f2e6aaa56d0a registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-03 16:01:48.818953 | orchestrator | 9e7463fbb980 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-06-03 16:01:48.818960 | orchestrator | 32a55737eeb2 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-03 16:01:48.818967 | orchestrator | f716d70539c8 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-03 16:01:48.818973 | orchestrator | f8bde30bd969 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-03 16:01:48.818980 | orchestrator | 454b0b854b33 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-06-03 16:01:48.818987 | orchestrator | 7edeeb83fe12 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-06-03 16:01:48.818993 | orchestrator | e2a6f54dad14 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-06-03 16:01:48.819000 | orchestrator | d1ba74dbde60 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-06-03 16:01:48.819007 | orchestrator | 9ec5ea9abdea registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-03 16:01:48.819013 | orchestrator | 8f10ac5ae123 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-06-03 16:01:48.819020 | orchestrator | 4f2aff08c3d5 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago 
Up 10 minutes (healthy) designate_backend_bind9 2025-06-03 16:01:48.819027 | orchestrator | 5997cc1e19a9 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-06-03 16:01:48.819050 | orchestrator | 47338eca75dd registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-06-03 16:01:48.819061 | orchestrator | 4d95366097af registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-06-03 16:01:48.819072 | orchestrator | 83c551303465 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-06-03 16:01:48.819094 | orchestrator | 25881dd6888e registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-03 16:01:48.819104 | orchestrator | fc29eee48793 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-06-03 16:01:48.819115 | orchestrator | 9a697a5a8865 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-06-03 16:01:48.819127 | orchestrator | 9acf52ae5d45 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-06-03 16:01:48.819138 | orchestrator | 0d85aa0861af registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-06-03 16:01:48.819152 | orchestrator | 22ee84be54e2 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-06-03 16:01:48.819163 | orchestrator | b21ebe4916ba registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-06-03 16:01:48.819175 | orchestrator | ee19b6db7a0c registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-06-03 16:01:48.819184 | orchestrator | ddab5fe66b35 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-06-03 16:01:48.819192 | orchestrator | 62c822517db6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2025-06-03 16:01:48.819206 | orchestrator | a33278457615 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-06-03 16:01:48.819214 | orchestrator | 68cca5526578 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-06-03 16:01:48.819221 | orchestrator | 485b1b9750b5 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-06-03 16:01:48.819229 | orchestrator | c3b1e649fb56 
registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-06-03 16:01:48.819237 | orchestrator | 55e696ea24db registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-06-03 16:01:48.819245 | orchestrator | 595d72437523 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-06-03 16:01:48.819253 | orchestrator | fb1e79f39066 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-06-03 16:01:48.819261 | orchestrator | f35f1caa5e7d registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-06-03 16:01:48.819281 | orchestrator | 3ec4a5b4422f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2 2025-06-03 16:01:48.819289 | orchestrator | c9ab587e8cce registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-06-03 16:01:48.819301 | orchestrator | 086d47d17110 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-06-03 16:01:48.819309 | orchestrator | 9386798c3187 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd 2025-06-03 16:01:48.819317 | orchestrator | 1870f95c0f45 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db 2025-06-03 16:01:48.819324 | orchestrator | 233308c21a52 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db 2025-06-03 16:01:48.819332 | orchestrator | 4f0bdc1e8d2f registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-06-03 16:01:48.819340 | orchestrator | bc0e48f1fcfb registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-06-03 16:01:48.819348 | orchestrator | 586ffcfd7931 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2025-06-03 16:01:48.819356 | orchestrator | f77284f440ac registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-06-03 16:01:48.819364 | orchestrator | 066f5f4593bf registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-03 16:01:48.819372 | orchestrator | 2d53c2023633 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-03 16:01:48.819379 | orchestrator | 8c0143cc3cf8 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-03 16:01:48.819388 | orchestrator | 52370bde398d registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) 
memcached 2025-06-03 16:01:48.819395 | orchestrator | b0ae8d590712 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-03 16:01:48.819403 | orchestrator | 09fe6ce55ffa registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-03 16:01:48.819411 | orchestrator | 6e4e86fb1a1b registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-03 16:01:49.063758 | orchestrator | 2025-06-03 16:01:49.063848 | orchestrator | ## Images @ testbed-node-2 2025-06-03 16:01:49.063860 | orchestrator | 2025-06-03 16:01:49.063867 | orchestrator | + echo 2025-06-03 16:01:49.063899 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-03 16:01:49.063907 | orchestrator | + echo 2025-06-03 16:01:49.063922 | orchestrator | + osism container testbed-node-2 images 2025-06-03 16:01:51.251973 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-03 16:01:51.252083 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 3 days ago 319MB 2025-06-03 16:01:51.252105 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 days ago 319MB 2025-06-03 16:01:51.252123 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 days ago 330MB 2025-06-03 16:01:51.252142 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 days ago 1.59GB 2025-06-03 16:01:51.252161 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 days ago 1.55GB 2025-06-03 16:01:51.252179 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 days ago 419MB 2025-06-03 16:01:51.252197 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 days ago 747MB 2025-06-03 16:01:51.252215 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 days ago 327MB 2025-06-03 16:01:51.252232 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 days ago 376MB 2025-06-03 16:01:51.252249 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 days ago 629MB 2025-06-03 16:01:51.252265 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 days ago 1.01GB 2025-06-03 16:01:51.252281 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 days ago 591MB 2025-06-03 16:01:51.252299 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 days ago 354MB 2025-06-03 16:01:51.252318 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 days ago 352MB 2025-06-03 16:01:51.252336 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 days ago 411MB 2025-06-03 16:01:51.252354 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 days ago 345MB 2025-06-03 16:01:51.252408 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 days ago 359MB 2025-06-03 16:01:51.252454 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 days ago 325MB 
2025-06-03 16:01:51.252473 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 days ago 326MB 2025-06-03 16:01:51.252491 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 days ago 1.21GB 2025-06-03 16:01:51.252503 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 days ago 362MB 2025-06-03 16:01:51.252516 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 days ago 362MB 2025-06-03 16:01:51.252529 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 days ago 1.15GB 2025-06-03 16:01:51.252542 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 days ago 1.04GB 2025-06-03 16:01:51.252555 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 days ago 1.25GB 2025-06-03 16:01:51.252592 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 days ago 1.2GB 2025-06-03 16:01:51.252606 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 days ago 1.31GB 2025-06-03 16:01:51.252619 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 3 days ago 1.12GB 2025-06-03 16:01:51.252632 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 3 days ago 1.12GB 2025-06-03 16:01:51.252645 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 3 days ago 1.1GB 2025-06-03 16:01:51.252766 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 3 days ago 1.1GB 2025-06-03 16:01:51.252800 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 3 days ago 1.1GB 2025-06-03 16:01:51.252814 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 days ago 1.41GB 2025-06-03 16:01:51.252827 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 days ago 1.41GB 2025-06-03 16:01:51.252841 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 days ago 1.06GB 2025-06-03 16:01:51.252853 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 days ago 1.06GB 2025-06-03 16:01:51.252866 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 days ago 1.05GB 2025-06-03 16:01:51.252878 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 days ago 1.05GB 2025-06-03 16:01:51.252891 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 days ago 1.05GB 2025-06-03 16:01:51.252903 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 days ago 1.05GB 2025-06-03 16:01:51.252920 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 days ago 1.3GB 2025-06-03 16:01:51.252931 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 days ago 1.29GB 2025-06-03 16:01:51.252943 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 days ago 1.42GB 2025-06-03 16:01:51.252962 | orchestrator | 
registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 days ago 1.29GB 2025-06-03 16:01:51.252984 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 days ago 1.06GB 2025-06-03 16:01:51.253009 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 days ago 1.06GB 2025-06-03 16:01:51.253027 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 days ago 1.06GB 2025-06-03 16:01:51.253045 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 days ago 1.11GB 2025-06-03 16:01:51.253061 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 days ago 1.13GB 2025-06-03 16:01:51.253076 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 days ago 1.11GB 2025-06-03 16:01:51.253092 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 days ago 947MB 2025-06-03 16:01:51.253123 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 days ago 947MB 2025-06-03 16:01:51.253140 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 days ago 948MB 2025-06-03 16:01:51.253157 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 days ago 948MB 2025-06-03 16:01:51.253174 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-03 16:01:51.497142 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-03 16:01:51.506514 | orchestrator | + set -e 2025-06-03 16:01:51.506610 | orchestrator | + source /opt/manager-vars.sh 2025-06-03 16:01:51.507481 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-03 16:01:51.507550 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-03 16:01:51.507563 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-03 16:01:51.507573 | orchestrator | ++ CEPH_VERSION=reef 2025-06-03 16:01:51.507583 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-03 16:01:51.507595 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-03 16:01:51.507605 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-03 16:01:51.507615 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-03 16:01:51.507625 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-03 16:01:51.507634 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-03 16:01:51.507644 | orchestrator | ++ export ARA=false 2025-06-03 16:01:51.507689 | orchestrator | ++ ARA=false 2025-06-03 16:01:51.507759 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-03 16:01:51.507770 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-03 16:01:51.507780 | orchestrator | ++ export TEMPEST=false 2025-06-03 16:01:51.507914 | orchestrator | ++ TEMPEST=false 2025-06-03 16:01:51.507927 | orchestrator | ++ export IS_ZUUL=true 2025-06-03 16:01:51.507936 | orchestrator | ++ IS_ZUUL=true 2025-06-03 16:01:51.507946 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.73 2025-06-03 16:01:51.507961 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.73 2025-06-03 16:01:51.507971 | orchestrator | ++ export EXTERNAL_API=false 2025-06-03 16:01:51.507985 | orchestrator | ++ EXTERNAL_API=false 2025-06-03 16:01:51.508002 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-03 16:01:51.508019 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-03 
16:01:51.508035 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-03 16:01:51.508051 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-03 16:01:51.508067 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-03 16:01:51.508083 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-03 16:01:51.508098 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-03 16:01:51.508114 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-03 16:01:51.516892 | orchestrator | + set -e 2025-06-03 16:01:51.516991 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-03 16:01:51.517006 | orchestrator | ++ export INTERACTIVE=false 2025-06-03 16:01:51.517019 | orchestrator | ++ INTERACTIVE=false 2025-06-03 16:01:51.517031 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-03 16:01:51.517042 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-03 16:01:51.517077 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-03 16:01:51.517920 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-03 16:01:51.524719 | orchestrator | 2025-06-03 16:01:51.524812 | orchestrator | # Ceph status 2025-06-03 16:01:51.524833 | orchestrator | 2025-06-03 16:01:51.524850 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-03 16:01:51.524867 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-03 16:01:51.524885 | orchestrator | + echo 2025-06-03 16:01:51.524902 | orchestrator | + echo '# Ceph status' 2025-06-03 16:01:51.524919 | orchestrator | + echo 2025-06-03 16:01:51.524937 | orchestrator | + ceph -s 2025-06-03 16:01:52.093520 | orchestrator | cluster: 2025-06-03 16:01:52.093620 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-03 16:01:52.093633 | orchestrator | health: HEALTH_OK 2025-06-03 16:01:52.093643 | orchestrator | 2025-06-03 16:01:52.093708 | orchestrator | services: 2025-06-03 16:01:52.093718 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-06-03 16:01:52.093728 | orchestrator | mgr: testbed-node-2(active, since 15m), standbys: testbed-node-1, testbed-node-0 2025-06-03 16:01:52.093738 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-03 16:01:52.093747 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m) 2025-06-03 16:01:52.093783 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-03 16:01:52.093791 | orchestrator | 2025-06-03 16:01:52.093799 | orchestrator | data: 2025-06-03 16:01:52.093807 | orchestrator | volumes: 1/1 healthy 2025-06-03 16:01:52.093816 | orchestrator | pools: 14 pools, 401 pgs 2025-06-03 16:01:52.093824 | orchestrator | objects: 520 objects, 2.2 GiB 2025-06-03 16:01:52.093832 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-03 16:01:52.093840 | orchestrator | pgs: 401 active+clean 2025-06-03 16:01:52.093847 | orchestrator | 2025-06-03 16:01:52.137107 | orchestrator | 2025-06-03 16:01:52.137232 | orchestrator | # Ceph versions 2025-06-03 16:01:52.137256 | orchestrator | 2025-06-03 16:01:52.137275 | orchestrator | + echo 2025-06-03 16:01:52.137294 | orchestrator | + echo '# Ceph versions' 2025-06-03 16:01:52.137314 | orchestrator | + echo 2025-06-03 16:01:52.137331 | orchestrator | + ceph versions 2025-06-03 16:01:52.742902 | orchestrator | { 2025-06-03 16:01:52.743031 | orchestrator | "mon": { 2025-06-03 16:01:52.743049 | orchestrator | "ceph version 18.2.7 
(6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-03 16:01:52.743063 | orchestrator | }, 2025-06-03 16:01:52.743074 | orchestrator | "mgr": { 2025-06-03 16:01:52.743085 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-03 16:01:52.743096 | orchestrator | }, 2025-06-03 16:01:52.743107 | orchestrator | "osd": { 2025-06-03 16:01:52.743118 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-03 16:01:52.743128 | orchestrator | }, 2025-06-03 16:01:52.743139 | orchestrator | "mds": { 2025-06-03 16:01:52.743150 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-03 16:01:52.743160 | orchestrator | }, 2025-06-03 16:01:52.743171 | orchestrator | "rgw": { 2025-06-03 16:01:52.743181 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-03 16:01:52.743192 | orchestrator | }, 2025-06-03 16:01:52.743203 | orchestrator | "overall": { 2025-06-03 16:01:52.743215 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-03 16:01:52.743235 | orchestrator | } 2025-06-03 16:01:52.743251 | orchestrator | } 2025-06-03 16:01:52.795609 | orchestrator | 2025-06-03 16:01:52.795740 | orchestrator | # Ceph OSD tree 2025-06-03 16:01:52.795755 | orchestrator | 2025-06-03 16:01:52.795764 | orchestrator | + echo 2025-06-03 16:01:52.795774 | orchestrator | + echo '# Ceph OSD tree' 2025-06-03 16:01:52.795783 | orchestrator | + echo 2025-06-03 16:01:52.795791 | orchestrator | + ceph osd df tree 2025-06-03 16:01:53.302391 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-03 16:01:53.302533 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-06-03 16:01:53.302548 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-06-03 16:01:53.302576 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 18 GiB 7.43 1.26 201 up osd.0 2025-06-03 16:01:53.302588 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 900 MiB 827 MiB 1 KiB 74 MiB 19 GiB 4.40 0.74 189 up osd.5 2025-06-03 16:01:53.302599 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-03 16:01:53.302611 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.16 1.04 190 up osd.1 2025-06-03 16:01:53.302622 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.67 0.96 202 up osd.4 2025-06-03 16:01:53.302633 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-03 16:01:53.302644 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.24 1.22 189 up osd.2 2025-06-03 16:01:53.302716 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 940 MiB 867 MiB 1 KiB 74 MiB 19 GiB 4.60 0.78 199 up osd.3 2025-06-03 16:01:53.302755 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-06-03 16:01:53.302767 | orchestrator | MIN/MAX VAR: 0.74/1.26 STDDEV: 1.17 2025-06-03 16:01:53.341816 | orchestrator | 2025-06-03 16:01:53.341924 | orchestrator | # Ceph monitor status 2025-06-03 16:01:53.341948 | orchestrator | 2025-06-03 16:01:53.341967 | orchestrator | + echo 2025-06-03 16:01:53.341985 | orchestrator | 
+ echo '# Ceph monitor status' 2025-06-03 16:01:53.342003 | orchestrator | + echo 2025-06-03 16:01:53.342070 | orchestrator | + ceph mon stat 2025-06-03 16:01:53.906774 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-03 16:01:53.950311 | orchestrator | 2025-06-03 16:01:53.950392 | orchestrator | # Ceph quorum status 2025-06-03 16:01:53.950403 | orchestrator | 2025-06-03 16:01:53.950411 | orchestrator | + echo 2025-06-03 16:01:53.950418 | orchestrator | + echo '# Ceph quorum status' 2025-06-03 16:01:53.950426 | orchestrator | + echo 2025-06-03 16:01:53.950876 | orchestrator | + ceph quorum_status 2025-06-03 16:01:53.951078 | orchestrator | + jq 2025-06-03 16:01:54.581572 | orchestrator | { 2025-06-03 16:01:54.581700 | orchestrator | "election_epoch": 4, 2025-06-03 16:01:54.581712 | orchestrator | "quorum": [ 2025-06-03 16:01:54.581718 | orchestrator | 0, 2025-06-03 16:01:54.581724 | orchestrator | 1, 2025-06-03 16:01:54.581730 | orchestrator | 2 2025-06-03 16:01:54.581735 | orchestrator | ], 2025-06-03 16:01:54.581741 | orchestrator | "quorum_names": [ 2025-06-03 16:01:54.581746 | orchestrator | "testbed-node-0", 2025-06-03 16:01:54.581752 | orchestrator | "testbed-node-1", 2025-06-03 16:01:54.581758 | orchestrator | "testbed-node-2" 2025-06-03 16:01:54.581763 | orchestrator | ], 2025-06-03 16:01:54.581769 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-03 16:01:54.581776 | orchestrator | "quorum_age": 1690, 2025-06-03 16:01:54.581781 | orchestrator | "features": { 2025-06-03 16:01:54.581787 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-03 16:01:54.581792 | orchestrator | "quorum_mon": [ 2025-06-03 16:01:54.581798 | orchestrator | "kraken", 2025-06-03 16:01:54.581803 | orchestrator | "luminous", 2025-06-03 16:01:54.581809 | orchestrator | "mimic", 2025-06-03 16:01:54.581815 | orchestrator | "osdmap-prune", 2025-06-03 16:01:54.581820 | orchestrator | "nautilus", 2025-06-03 16:01:54.581825 | orchestrator | "octopus", 2025-06-03 16:01:54.581831 | orchestrator | "pacific", 2025-06-03 16:01:54.581836 | orchestrator | "elector-pinging", 2025-06-03 16:01:54.581842 | orchestrator | "quincy", 2025-06-03 16:01:54.581847 | orchestrator | "reef" 2025-06-03 16:01:54.581853 | orchestrator | ] 2025-06-03 16:01:54.581858 | orchestrator | }, 2025-06-03 16:01:54.581864 | orchestrator | "monmap": { 2025-06-03 16:01:54.581869 | orchestrator | "epoch": 1, 2025-06-03 16:01:54.581875 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-03 16:01:54.581881 | orchestrator | "modified": "2025-06-03T15:33:31.878128Z", 2025-06-03 16:01:54.581887 | orchestrator | "created": "2025-06-03T15:33:31.878128Z", 2025-06-03 16:01:54.581892 | orchestrator | "min_mon_release": 18, 2025-06-03 16:01:54.581898 | orchestrator | "min_mon_release_name": "reef", 2025-06-03 16:01:54.581903 | orchestrator | "election_strategy": 1, 2025-06-03 16:01:54.581909 | orchestrator | "disallowed_leaders: ": "", 2025-06-03 16:01:54.581914 | orchestrator | "stretch_mode": false, 2025-06-03 16:01:54.581919 | orchestrator | "tiebreaker_mon": "", 2025-06-03 16:01:54.581925 | orchestrator | "removed_ranks: ": "", 2025-06-03 16:01:54.581930 | orchestrator | "features": { 
2025-06-03 16:01:54.581936 | orchestrator | "persistent": [ 2025-06-03 16:01:54.581941 | orchestrator | "kraken", 2025-06-03 16:01:54.581946 | orchestrator | "luminous", 2025-06-03 16:01:54.581951 | orchestrator | "mimic", 2025-06-03 16:01:54.581957 | orchestrator | "osdmap-prune", 2025-06-03 16:01:54.581962 | orchestrator | "nautilus", 2025-06-03 16:01:54.581967 | orchestrator | "octopus", 2025-06-03 16:01:54.581973 | orchestrator | "pacific", 2025-06-03 16:01:54.581978 | orchestrator | "elector-pinging", 2025-06-03 16:01:54.581983 | orchestrator | "quincy", 2025-06-03 16:01:54.581989 | orchestrator | "reef" 2025-06-03 16:01:54.581994 | orchestrator | ], 2025-06-03 16:01:54.581999 | orchestrator | "optional": [] 2025-06-03 16:01:54.582005 | orchestrator | }, 2025-06-03 16:01:54.582010 | orchestrator | "mons": [ 2025-06-03 16:01:54.582067 | orchestrator | { 2025-06-03 16:01:54.582072 | orchestrator | "rank": 0, 2025-06-03 16:01:54.582099 | orchestrator | "name": "testbed-node-0", 2025-06-03 16:01:54.582105 | orchestrator | "public_addrs": { 2025-06-03 16:01:54.582112 | orchestrator | "addrvec": [ 2025-06-03 16:01:54.582118 | orchestrator | { 2025-06-03 16:01:54.582125 | orchestrator | "type": "v2", 2025-06-03 16:01:54.582131 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-03 16:01:54.582137 | orchestrator | "nonce": 0 2025-06-03 16:01:54.582144 | orchestrator | }, 2025-06-03 16:01:54.582267 | orchestrator | { 2025-06-03 16:01:54.582279 | orchestrator | "type": "v1", 2025-06-03 16:01:54.582286 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-03 16:01:54.582292 | orchestrator | "nonce": 0 2025-06-03 16:01:54.582299 | orchestrator | } 2025-06-03 16:01:54.582305 | orchestrator | ] 2025-06-03 16:01:54.582311 | orchestrator | }, 2025-06-03 16:01:54.582318 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-03 16:01:54.582325 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-03 16:01:54.582331 | orchestrator | "priority": 0, 2025-06-03 16:01:54.582337 | orchestrator | "weight": 0, 2025-06-03 16:01:54.582343 | orchestrator | "crush_location": "{}" 2025-06-03 16:01:54.582349 | orchestrator | }, 2025-06-03 16:01:54.582356 | orchestrator | { 2025-06-03 16:01:54.582362 | orchestrator | "rank": 1, 2025-06-03 16:01:54.582369 | orchestrator | "name": "testbed-node-1", 2025-06-03 16:01:54.582375 | orchestrator | "public_addrs": { 2025-06-03 16:01:54.582382 | orchestrator | "addrvec": [ 2025-06-03 16:01:54.582388 | orchestrator | { 2025-06-03 16:01:54.582394 | orchestrator | "type": "v2", 2025-06-03 16:01:54.582401 | orchestrator | "addr": "192.168.16.11:3300", 2025-06-03 16:01:54.582408 | orchestrator | "nonce": 0 2025-06-03 16:01:54.582414 | orchestrator | }, 2025-06-03 16:01:54.582420 | orchestrator | { 2025-06-03 16:01:54.582426 | orchestrator | "type": "v1", 2025-06-03 16:01:54.582433 | orchestrator | "addr": "192.168.16.11:6789", 2025-06-03 16:01:54.582439 | orchestrator | "nonce": 0 2025-06-03 16:01:54.582446 | orchestrator | } 2025-06-03 16:01:54.582452 | orchestrator | ] 2025-06-03 16:01:54.582458 | orchestrator | }, 2025-06-03 16:01:54.582464 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-03 16:01:54.582471 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-03 16:01:54.582477 | orchestrator | "priority": 0, 2025-06-03 16:01:54.582486 | orchestrator | "weight": 0, 2025-06-03 16:01:54.582495 | orchestrator | "crush_location": "{}" 2025-06-03 16:01:54.582503 | orchestrator | }, 2025-06-03 16:01:54.582512 | orchestrator | { 2025-06-03 
16:01:54.582520 | orchestrator | "rank": 2, 2025-06-03 16:01:54.582529 | orchestrator | "name": "testbed-node-2", 2025-06-03 16:01:54.582538 | orchestrator | "public_addrs": { 2025-06-03 16:01:54.582546 | orchestrator | "addrvec": [ 2025-06-03 16:01:54.582554 | orchestrator | { 2025-06-03 16:01:54.582563 | orchestrator | "type": "v2", 2025-06-03 16:01:54.582572 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-03 16:01:54.582581 | orchestrator | "nonce": 0 2025-06-03 16:01:54.582590 | orchestrator | }, 2025-06-03 16:01:54.582599 | orchestrator | { 2025-06-03 16:01:54.582608 | orchestrator | "type": "v1", 2025-06-03 16:01:54.582617 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-03 16:01:54.582627 | orchestrator | "nonce": 0 2025-06-03 16:01:54.582636 | orchestrator | } 2025-06-03 16:01:54.582643 | orchestrator | ] 2025-06-03 16:01:54.582648 | orchestrator | }, 2025-06-03 16:01:54.582671 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-03 16:01:54.582677 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-03 16:01:54.582683 | orchestrator | "priority": 0, 2025-06-03 16:01:54.582688 | orchestrator | "weight": 0, 2025-06-03 16:01:54.582694 | orchestrator | "crush_location": "{}" 2025-06-03 16:01:54.582699 | orchestrator | } 2025-06-03 16:01:54.582705 | orchestrator | ] 2025-06-03 16:01:54.582710 | orchestrator | } 2025-06-03 16:01:54.582715 | orchestrator | } 2025-06-03 16:01:54.582730 | orchestrator | 2025-06-03 16:01:54.582736 | orchestrator | # Ceph free space status 2025-06-03 16:01:54.582742 | orchestrator | 2025-06-03 16:01:54.582747 | orchestrator | + echo 2025-06-03 16:01:54.582752 | orchestrator | + echo '# Ceph free space status' 2025-06-03 16:01:54.582758 | orchestrator | + echo 2025-06-03 16:01:54.582763 | orchestrator | + ceph df 2025-06-03 16:01:55.218847 | orchestrator | --- RAW STORAGE --- 2025-06-03 16:01:55.218968 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-06-03 16:01:55.219047 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-03 16:01:55.219070 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-03 16:01:55.219089 | orchestrator | 2025-06-03 16:01:55.219109 | orchestrator | --- POOLS --- 2025-06-03 16:01:55.219122 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-06-03 16:01:55.219134 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-06-03 16:01:55.219145 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-06-03 16:01:55.219157 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-06-03 16:01:55.219175 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-06-03 16:01:55.219194 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-06-03 16:01:55.219213 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-06-03 16:01:55.219231 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-06-03 16:01:55.219247 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-06-03 16:01:55.219258 | orchestrator | .rgw.root 9 32 1.4 KiB 4 32 KiB 0 52 GiB 2025-06-03 16:01:55.219269 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-06-03 16:01:55.219279 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-06-03 16:01:55.219297 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.96 35 GiB 2025-06-03 16:01:55.219314 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-06-03 16:01:55.219333 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-06-03 16:01:55.273602 | 
orchestrator | ++ semver 9.1.0 5.0.0 2025-06-03 16:01:55.321474 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-03 16:01:55.321591 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-06-03 16:01:55.321608 | orchestrator | + osism apply facts 2025-06-03 16:01:57.034143 | orchestrator | Registering Redlock._acquired_script 2025-06-03 16:01:57.034259 | orchestrator | Registering Redlock._extend_script 2025-06-03 16:01:57.034282 | orchestrator | Registering Redlock._release_script 2025-06-03 16:01:57.098764 | orchestrator | 2025-06-03 16:01:57 | INFO  | Task d9cff99f-e98c-4cb1-b1b1-a142c23dd566 (facts) was prepared for execution. 2025-06-03 16:01:57.098876 | orchestrator | 2025-06-03 16:01:57 | INFO  | It takes a moment until task d9cff99f-e98c-4cb1-b1b1-a142c23dd566 (facts) has been started and output is visible here. 2025-06-03 16:02:01.301595 | orchestrator | 2025-06-03 16:02:01.302460 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-03 16:02:01.303261 | orchestrator | 2025-06-03 16:02:01.304276 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-03 16:02:01.304594 | orchestrator | Tuesday 03 June 2025 16:02:01 +0000 (0:00:00.267) 0:00:00.267 ********** 2025-06-03 16:02:02.817838 | orchestrator | ok: [testbed-manager] 2025-06-03 16:02:02.819942 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:02.820772 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:02:02.821343 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:02:02.829560 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:02.829627 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:02:02.829640 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:02:02.829653 | orchestrator | 2025-06-03 16:02:02.829667 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-03 16:02:02.830283 | orchestrator | Tuesday 03 June 2025 16:02:02 +0000 (0:00:01.512) 0:00:01.779 ********** 2025-06-03 16:02:02.997930 | orchestrator | skipping: [testbed-manager] 2025-06-03 16:02:03.084777 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:03.192847 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:02:03.265758 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:02:03.346404 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:02:04.115484 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:02:04.115649 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:02:04.116451 | orchestrator | 2025-06-03 16:02:04.116999 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-03 16:02:04.117715 | orchestrator | 2025-06-03 16:02:04.118315 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-03 16:02:04.118733 | orchestrator | Tuesday 03 June 2025 16:02:04 +0000 (0:00:01.303) 0:00:03.083 ********** 2025-06-03 16:02:09.389582 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:02:09.390756 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:09.391676 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:09.393069 | orchestrator | ok: [testbed-manager] 2025-06-03 16:02:09.394713 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:02:09.395349 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:02:09.396744 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:02:09.397532 | orchestrator | 2025-06-03 16:02:09.398356 | orchestrator | PLAY [Gather facts for all hosts if 
using --limit] ***************************** 2025-06-03 16:02:09.399398 | orchestrator | 2025-06-03 16:02:09.400217 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-03 16:02:09.401010 | orchestrator | Tuesday 03 June 2025 16:02:09 +0000 (0:00:05.273) 0:00:08.357 ********** 2025-06-03 16:02:09.559068 | orchestrator | skipping: [testbed-manager] 2025-06-03 16:02:09.637456 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:09.719895 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:02:09.799552 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:02:09.882900 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:02:09.929262 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:02:09.930459 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:02:09.931731 | orchestrator | 2025-06-03 16:02:09.932330 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 16:02:09.933071 | orchestrator | 2025-06-03 16:02:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 16:02:09.933112 | orchestrator | 2025-06-03 16:02:09 | INFO  | Please wait and do not abort execution. 2025-06-03 16:02:09.933557 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:02:09.934066 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:02:09.934831 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:02:09.935092 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:02:09.935658 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:02:09.936561 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:02:09.936836 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:02:09.937313 | orchestrator | 2025-06-03 16:02:09.938008 | orchestrator | 2025-06-03 16:02:09.938419 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 16:02:09.938931 | orchestrator | Tuesday 03 June 2025 16:02:09 +0000 (0:00:00.540) 0:00:08.897 ********** 2025-06-03 16:02:09.939337 | orchestrator | =============================================================================== 2025-06-03 16:02:09.939825 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.27s 2025-06-03 16:02:09.940161 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.51s 2025-06-03 16:02:09.940891 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.30s 2025-06-03 16:02:09.941105 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-06-03 16:02:10.633983 | orchestrator | + osism validate ceph-mons 2025-06-03 16:02:12.348596 | orchestrator | Registering Redlock._acquired_script 2025-06-03 16:02:12.348754 | orchestrator | Registering Redlock._extend_script 2025-06-03 16:02:12.348781 | orchestrator | Registering Redlock._release_script 2025-06-03 16:02:32.400478 | orchestrator | 2025-06-03 16:02:32.400616 | orchestrator | PLAY [Ceph 
validate mons] ****************************************************** 2025-06-03 16:02:32.400647 | orchestrator | 2025-06-03 16:02:32.400666 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-03 16:02:32.400683 | orchestrator | Tuesday 03 June 2025 16:02:16 +0000 (0:00:00.437) 0:00:00.437 ********** 2025-06-03 16:02:32.400703 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:32.400721 | orchestrator | 2025-06-03 16:02:32.400832 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-03 16:02:32.400845 | orchestrator | Tuesday 03 June 2025 16:02:17 +0000 (0:00:00.669) 0:00:01.107 ********** 2025-06-03 16:02:32.400856 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:32.400867 | orchestrator | 2025-06-03 16:02:32.400878 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-03 16:02:32.400889 | orchestrator | Tuesday 03 June 2025 16:02:18 +0000 (0:00:00.823) 0:00:01.930 ********** 2025-06-03 16:02:32.400900 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.400913 | orchestrator | 2025-06-03 16:02:32.400942 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-03 16:02:32.400954 | orchestrator | Tuesday 03 June 2025 16:02:18 +0000 (0:00:00.236) 0:00:02.166 ********** 2025-06-03 16:02:32.400965 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.400976 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:02:32.400987 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:32.400997 | orchestrator | 2025-06-03 16:02:32.401008 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-03 16:02:32.401019 | orchestrator | Tuesday 03 June 2025 16:02:18 +0000 (0:00:00.311) 0:00:02.478 ********** 2025-06-03 16:02:32.401030 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:02:32.401041 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:32.401052 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.401062 | orchestrator | 2025-06-03 16:02:32.401073 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-03 16:02:32.401084 | orchestrator | Tuesday 03 June 2025 16:02:19 +0000 (0:00:01.093) 0:00:03.571 ********** 2025-06-03 16:02:32.401095 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.401107 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:02:32.401117 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:02:32.401128 | orchestrator | 2025-06-03 16:02:32.401139 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-03 16:02:32.401150 | orchestrator | Tuesday 03 June 2025 16:02:20 +0000 (0:00:00.276) 0:00:03.848 ********** 2025-06-03 16:02:32.401161 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.401172 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:02:32.401183 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:32.401193 | orchestrator | 2025-06-03 16:02:32.401204 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:02:32.401215 | orchestrator | Tuesday 03 June 2025 16:02:20 +0000 (0:00:00.524) 0:00:04.372 ********** 2025-06-03 16:02:32.401226 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.401237 | orchestrator | ok: [testbed-node-1] 
2025-06-03 16:02:32.401248 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:32.401259 | orchestrator | 2025-06-03 16:02:32.401270 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-03 16:02:32.401281 | orchestrator | Tuesday 03 June 2025 16:02:20 +0000 (0:00:00.305) 0:00:04.677 ********** 2025-06-03 16:02:32.401292 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.401328 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:02:32.401340 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:02:32.401351 | orchestrator | 2025-06-03 16:02:32.401361 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-03 16:02:32.401372 | orchestrator | Tuesday 03 June 2025 16:02:21 +0000 (0:00:00.296) 0:00:04.974 ********** 2025-06-03 16:02:32.401383 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.401393 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:02:32.401404 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:32.401414 | orchestrator | 2025-06-03 16:02:32.401425 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-03 16:02:32.401436 | orchestrator | Tuesday 03 June 2025 16:02:21 +0000 (0:00:00.289) 0:00:05.264 ********** 2025-06-03 16:02:32.401447 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.401457 | orchestrator | 2025-06-03 16:02:32.401471 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-03 16:02:32.401487 | orchestrator | Tuesday 03 June 2025 16:02:22 +0000 (0:00:00.653) 0:00:05.917 ********** 2025-06-03 16:02:32.401498 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.401509 | orchestrator | 2025-06-03 16:02:32.401519 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-03 16:02:32.401530 | orchestrator | Tuesday 03 June 2025 16:02:22 +0000 (0:00:00.250) 0:00:06.168 ********** 2025-06-03 16:02:32.401541 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.401552 | orchestrator | 2025-06-03 16:02:32.401562 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:02:32.401573 | orchestrator | Tuesday 03 June 2025 16:02:22 +0000 (0:00:00.256) 0:00:06.424 ********** 2025-06-03 16:02:32.401584 | orchestrator | 2025-06-03 16:02:32.401595 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:02:32.401606 | orchestrator | Tuesday 03 June 2025 16:02:22 +0000 (0:00:00.069) 0:00:06.493 ********** 2025-06-03 16:02:32.401616 | orchestrator | 2025-06-03 16:02:32.401627 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:02:32.401638 | orchestrator | Tuesday 03 June 2025 16:02:22 +0000 (0:00:00.069) 0:00:06.563 ********** 2025-06-03 16:02:32.401649 | orchestrator | 2025-06-03 16:02:32.401660 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-03 16:02:32.401671 | orchestrator | Tuesday 03 June 2025 16:02:22 +0000 (0:00:00.072) 0:00:06.635 ********** 2025-06-03 16:02:32.401682 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.401692 | orchestrator | 2025-06-03 16:02:32.401703 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-03 16:02:32.401714 | orchestrator | Tuesday 03 
June 2025 16:02:23 +0000 (0:00:00.273) 0:00:06.908 ********** 2025-06-03 16:02:32.401725 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.401765 | orchestrator | 2025-06-03 16:02:32.401797 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-03 16:02:32.401808 | orchestrator | Tuesday 03 June 2025 16:02:23 +0000 (0:00:00.244) 0:00:07.152 ********** 2025-06-03 16:02:32.401819 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.401830 | orchestrator | 2025-06-03 16:02:32.401841 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-06-03 16:02:32.401851 | orchestrator | Tuesday 03 June 2025 16:02:23 +0000 (0:00:00.130) 0:00:07.283 ********** 2025-06-03 16:02:32.401862 | orchestrator | changed: [testbed-node-0] 2025-06-03 16:02:32.401873 | orchestrator | 2025-06-03 16:02:32.401884 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-06-03 16:02:32.401894 | orchestrator | Tuesday 03 June 2025 16:02:25 +0000 (0:00:01.744) 0:00:09.027 ********** 2025-06-03 16:02:32.401905 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.401916 | orchestrator | 2025-06-03 16:02:32.401927 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-06-03 16:02:32.401937 | orchestrator | Tuesday 03 June 2025 16:02:25 +0000 (0:00:00.304) 0:00:09.332 ********** 2025-06-03 16:02:32.402003 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.402077 | orchestrator | 2025-06-03 16:02:32.402090 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-06-03 16:02:32.402101 | orchestrator | Tuesday 03 June 2025 16:02:25 +0000 (0:00:00.315) 0:00:09.647 ********** 2025-06-03 16:02:32.402112 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.402123 | orchestrator | 2025-06-03 16:02:32.402133 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-06-03 16:02:32.402144 | orchestrator | Tuesday 03 June 2025 16:02:26 +0000 (0:00:00.310) 0:00:09.958 ********** 2025-06-03 16:02:32.402155 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.402166 | orchestrator | 2025-06-03 16:02:32.402177 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-06-03 16:02:32.402187 | orchestrator | Tuesday 03 June 2025 16:02:26 +0000 (0:00:00.312) 0:00:10.270 ********** 2025-06-03 16:02:32.402198 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.402209 | orchestrator | 2025-06-03 16:02:32.402220 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-06-03 16:02:32.402231 | orchestrator | Tuesday 03 June 2025 16:02:26 +0000 (0:00:00.104) 0:00:10.374 ********** 2025-06-03 16:02:32.402241 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.402252 | orchestrator | 2025-06-03 16:02:32.402263 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-06-03 16:02:32.402274 | orchestrator | Tuesday 03 June 2025 16:02:26 +0000 (0:00:00.130) 0:00:10.505 ********** 2025-06-03 16:02:32.402284 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.402295 | orchestrator | 2025-06-03 16:02:32.402306 | orchestrator | TASK [Gather status data] ****************************************************** 2025-06-03 16:02:32.402316 | orchestrator | Tuesday 03 June 2025 16:02:26 
+0000 (0:00:00.128) 0:00:10.634 ********** 2025-06-03 16:02:32.402327 | orchestrator | changed: [testbed-node-0] 2025-06-03 16:02:32.402338 | orchestrator | 2025-06-03 16:02:32.402349 | orchestrator | TASK [Set health test data] **************************************************** 2025-06-03 16:02:32.402359 | orchestrator | Tuesday 03 June 2025 16:02:28 +0000 (0:00:01.536) 0:00:12.170 ********** 2025-06-03 16:02:32.402370 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.402381 | orchestrator | 2025-06-03 16:02:32.402392 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-06-03 16:02:32.402403 | orchestrator | Tuesday 03 June 2025 16:02:28 +0000 (0:00:00.291) 0:00:12.461 ********** 2025-06-03 16:02:32.402414 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.402424 | orchestrator | 2025-06-03 16:02:32.402435 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-06-03 16:02:32.402446 | orchestrator | Tuesday 03 June 2025 16:02:28 +0000 (0:00:00.131) 0:00:12.593 ********** 2025-06-03 16:02:32.402457 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:32.402468 | orchestrator | 2025-06-03 16:02:32.402479 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-06-03 16:02:32.402489 | orchestrator | Tuesday 03 June 2025 16:02:29 +0000 (0:00:00.156) 0:00:12.750 ********** 2025-06-03 16:02:32.402500 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.402511 | orchestrator | 2025-06-03 16:02:32.402522 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-06-03 16:02:32.402532 | orchestrator | Tuesday 03 June 2025 16:02:29 +0000 (0:00:00.123) 0:00:12.873 ********** 2025-06-03 16:02:32.402543 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.402554 | orchestrator | 2025-06-03 16:02:32.402565 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-03 16:02:32.402576 | orchestrator | Tuesday 03 June 2025 16:02:29 +0000 (0:00:00.316) 0:00:13.189 ********** 2025-06-03 16:02:32.402586 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:32.402597 | orchestrator | 2025-06-03 16:02:32.402609 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-03 16:02:32.402619 | orchestrator | Tuesday 03 June 2025 16:02:29 +0000 (0:00:00.304) 0:00:13.494 ********** 2025-06-03 16:02:32.402637 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:32.402648 | orchestrator | 2025-06-03 16:02:32.402659 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-03 16:02:32.402670 | orchestrator | Tuesday 03 June 2025 16:02:30 +0000 (0:00:00.240) 0:00:13.734 ********** 2025-06-03 16:02:32.402681 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:32.402691 | orchestrator | 2025-06-03 16:02:32.402702 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-03 16:02:32.402713 | orchestrator | Tuesday 03 June 2025 16:02:31 +0000 (0:00:01.625) 0:00:15.360 ********** 2025-06-03 16:02:32.402724 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:32.402757 | orchestrator | 2025-06-03 16:02:32.402768 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2025-06-03 16:02:32.402779 | orchestrator | Tuesday 03 June 2025 16:02:31 +0000 (0:00:00.257) 0:00:15.617 ********** 2025-06-03 16:02:32.402790 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:32.402801 | orchestrator | 2025-06-03 16:02:32.402825 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:02:34.726229 | orchestrator | Tuesday 03 June 2025 16:02:32 +0000 (0:00:00.244) 0:00:15.861 ********** 2025-06-03 16:02:34.726362 | orchestrator | 2025-06-03 16:02:34.726387 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:02:34.726408 | orchestrator | Tuesday 03 June 2025 16:02:32 +0000 (0:00:00.081) 0:00:15.943 ********** 2025-06-03 16:02:34.726426 | orchestrator | 2025-06-03 16:02:34.726445 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:02:34.726464 | orchestrator | Tuesday 03 June 2025 16:02:32 +0000 (0:00:00.073) 0:00:16.017 ********** 2025-06-03 16:02:34.726482 | orchestrator | 2025-06-03 16:02:34.726501 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-03 16:02:34.726546 | orchestrator | Tuesday 03 June 2025 16:02:32 +0000 (0:00:00.075) 0:00:16.092 ********** 2025-06-03 16:02:34.726567 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:34.726586 | orchestrator | 2025-06-03 16:02:34.726606 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-03 16:02:34.726631 | orchestrator | Tuesday 03 June 2025 16:02:33 +0000 (0:00:01.456) 0:00:17.548 ********** 2025-06-03 16:02:34.726651 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-03 16:02:34.726675 | orchestrator |  "msg": [ 2025-06-03 16:02:34.726699 | orchestrator |  "Validator run completed.", 2025-06-03 16:02:34.726723 | orchestrator |  "You can find the report file here:", 2025-06-03 16:02:34.726777 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-03T16:02:17+00:00-report.json", 2025-06-03 16:02:34.726800 | orchestrator |  "on the following host:", 2025-06-03 16:02:34.726821 | orchestrator |  "testbed-manager" 2025-06-03 16:02:34.726841 | orchestrator |  ] 2025-06-03 16:02:34.726865 | orchestrator | } 2025-06-03 16:02:34.726888 | orchestrator | 2025-06-03 16:02:34.726911 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 16:02:34.726934 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-06-03 16:02:34.726959 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:02:34.726978 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:02:34.726996 | orchestrator | 2025-06-03 16:02:34.727015 | orchestrator | 2025-06-03 16:02:34.727034 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 16:02:34.727053 | orchestrator | Tuesday 03 June 2025 16:02:34 +0000 (0:00:00.564) 0:00:18.113 ********** 2025-06-03 16:02:34.727104 | orchestrator | =============================================================================== 2025-06-03 16:02:34.727124 | orchestrator | Get monmap info from one mon 
container ---------------------------------- 1.74s 2025-06-03 16:02:34.727144 | orchestrator | Aggregate test results step one ----------------------------------------- 1.63s 2025-06-03 16:02:34.727162 | orchestrator | Gather status data ------------------------------------------------------ 1.54s 2025-06-03 16:02:34.727180 | orchestrator | Write report file ------------------------------------------------------- 1.46s 2025-06-03 16:02:34.727199 | orchestrator | Get container info ------------------------------------------------------ 1.09s 2025-06-03 16:02:34.727218 | orchestrator | Create report output directory ------------------------------------------ 0.82s 2025-06-03 16:02:34.727237 | orchestrator | Get timestamp for report file ------------------------------------------- 0.67s 2025-06-03 16:02:34.727257 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s 2025-06-03 16:02:34.727275 | orchestrator | Print report file information ------------------------------------------- 0.56s 2025-06-03 16:02:34.727294 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s 2025-06-03 16:02:34.727312 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.32s 2025-06-03 16:02:34.727331 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.32s 2025-06-03 16:02:34.727348 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s 2025-06-03 16:02:34.727368 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-06-03 16:02:34.727387 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.31s 2025-06-03 16:02:34.727405 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-06-03 16:02:34.727424 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.30s 2025-06-03 16:02:34.727442 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s 2025-06-03 16:02:34.727461 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s 2025-06-03 16:02:34.727490 | orchestrator | Set health test data ---------------------------------------------------- 0.29s 2025-06-03 16:02:34.958983 | orchestrator | + osism validate ceph-mgrs 2025-06-03 16:02:36.648404 | orchestrator | Registering Redlock._acquired_script 2025-06-03 16:02:36.648520 | orchestrator | Registering Redlock._extend_script 2025-06-03 16:02:36.648534 | orchestrator | Registering Redlock._release_script 2025-06-03 16:02:56.231857 | orchestrator | 2025-06-03 16:02:56.231962 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-06-03 16:02:56.231980 | orchestrator | 2025-06-03 16:02:56.231992 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-03 16:02:56.232004 | orchestrator | Tuesday 03 June 2025 16:02:41 +0000 (0:00:00.431) 0:00:00.431 ********** 2025-06-03 16:02:56.232015 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:56.232026 | orchestrator | 2025-06-03 16:02:56.232037 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-03 16:02:56.232048 | orchestrator | Tuesday 03 June 2025 16:02:41 +0000 (0:00:00.620) 0:00:01.052 
********** 2025-06-03 16:02:56.232058 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:56.232069 | orchestrator | 2025-06-03 16:02:56.232080 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-03 16:02:56.232091 | orchestrator | Tuesday 03 June 2025 16:02:42 +0000 (0:00:00.844) 0:00:01.896 ********** 2025-06-03 16:02:56.232102 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:56.232114 | orchestrator | 2025-06-03 16:02:56.232125 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-03 16:02:56.232136 | orchestrator | Tuesday 03 June 2025 16:02:42 +0000 (0:00:00.235) 0:00:02.132 ********** 2025-06-03 16:02:56.232146 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:56.232157 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:02:56.232168 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:56.232202 | orchestrator | 2025-06-03 16:02:56.232215 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-03 16:02:56.232240 | orchestrator | Tuesday 03 June 2025 16:02:43 +0000 (0:00:00.293) 0:00:02.426 ********** 2025-06-03 16:02:56.232252 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:56.232263 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:02:56.232273 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:56.232284 | orchestrator | 2025-06-03 16:02:56.232295 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-03 16:02:56.232305 | orchestrator | Tuesday 03 June 2025 16:02:44 +0000 (0:00:01.035) 0:00:03.462 ********** 2025-06-03 16:02:56.232319 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:56.232332 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:02:56.232346 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:02:56.232359 | orchestrator | 2025-06-03 16:02:56.232371 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-03 16:02:56.232384 | orchestrator | Tuesday 03 June 2025 16:02:44 +0000 (0:00:00.287) 0:00:03.749 ********** 2025-06-03 16:02:56.232398 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:56.232410 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:02:56.232423 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:56.232436 | orchestrator | 2025-06-03 16:02:56.232449 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:02:56.232461 | orchestrator | Tuesday 03 June 2025 16:02:45 +0000 (0:00:00.478) 0:00:04.228 ********** 2025-06-03 16:02:56.232473 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:56.232486 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:02:56.232498 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:56.232511 | orchestrator | 2025-06-03 16:02:56.232523 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-06-03 16:02:56.232536 | orchestrator | Tuesday 03 June 2025 16:02:45 +0000 (0:00:00.323) 0:00:04.551 ********** 2025-06-03 16:02:56.232549 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:56.232562 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:02:56.232576 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:02:56.232589 | orchestrator | 2025-06-03 16:02:56.232602 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 
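
The ceph-mgrs validation follows the same pattern: it first checks that the ceph-mgr container exists and is running on each control node, and then verifies that the required mgr modules are enabled. A rough manual spot check on one of the nodes, assuming Docker is the container runtime and jq is available, could look like:

    # show the ceph-mgr container and its state on this host
    docker ps --filter name=ceph-mgr --format '{{.Names}}: {{.Status}}'
    # list the mgr modules currently enabled in the cluster
    ceph mgr module ls -f json | jq -r '.enabled_modules[]'
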
2025-06-03 16:02:56.232614 | orchestrator | Tuesday 03 June 2025 16:02:45 +0000 (0:00:00.303) 0:00:04.855 ********** 2025-06-03 16:02:56.232626 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:56.232638 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:02:56.232651 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:02:56.232663 | orchestrator | 2025-06-03 16:02:56.232675 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-03 16:02:56.232686 | orchestrator | Tuesday 03 June 2025 16:02:46 +0000 (0:00:00.321) 0:00:05.177 ********** 2025-06-03 16:02:56.232696 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:56.232707 | orchestrator | 2025-06-03 16:02:56.232718 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-03 16:02:56.232728 | orchestrator | Tuesday 03 June 2025 16:02:46 +0000 (0:00:00.697) 0:00:05.874 ********** 2025-06-03 16:02:56.232739 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:56.232750 | orchestrator | 2025-06-03 16:02:56.232760 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-03 16:02:56.232771 | orchestrator | Tuesday 03 June 2025 16:02:46 +0000 (0:00:00.280) 0:00:06.155 ********** 2025-06-03 16:02:56.232813 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:56.232832 | orchestrator | 2025-06-03 16:02:56.232851 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:02:56.232870 | orchestrator | Tuesday 03 June 2025 16:02:47 +0000 (0:00:00.246) 0:00:06.402 ********** 2025-06-03 16:02:56.232889 | orchestrator | 2025-06-03 16:02:56.232902 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:02:56.232914 | orchestrator | Tuesday 03 June 2025 16:02:47 +0000 (0:00:00.070) 0:00:06.473 ********** 2025-06-03 16:02:56.232932 | orchestrator | 2025-06-03 16:02:56.232943 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:02:56.232954 | orchestrator | Tuesday 03 June 2025 16:02:47 +0000 (0:00:00.071) 0:00:06.544 ********** 2025-06-03 16:02:56.232964 | orchestrator | 2025-06-03 16:02:56.232975 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-03 16:02:56.232986 | orchestrator | Tuesday 03 June 2025 16:02:47 +0000 (0:00:00.072) 0:00:06.617 ********** 2025-06-03 16:02:56.232996 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:56.233007 | orchestrator | 2025-06-03 16:02:56.233026 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-03 16:02:56.233050 | orchestrator | Tuesday 03 June 2025 16:02:47 +0000 (0:00:00.264) 0:00:06.881 ********** 2025-06-03 16:02:56.233095 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:56.233130 | orchestrator | 2025-06-03 16:02:56.233170 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-06-03 16:02:56.233183 | orchestrator | Tuesday 03 June 2025 16:02:47 +0000 (0:00:00.284) 0:00:07.165 ********** 2025-06-03 16:02:56.233194 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:56.233205 | orchestrator | 2025-06-03 16:02:56.233216 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-06-03 16:02:56.233226 | orchestrator | Tuesday 03 June 2025 16:02:48 +0000 
(0:00:00.126) 0:00:07.292 ********** 2025-06-03 16:02:56.233237 | orchestrator | changed: [testbed-node-0] 2025-06-03 16:02:56.233248 | orchestrator | 2025-06-03 16:02:56.233258 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-06-03 16:02:56.233269 | orchestrator | Tuesday 03 June 2025 16:02:50 +0000 (0:00:02.113) 0:00:09.405 ********** 2025-06-03 16:02:56.233280 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:56.233290 | orchestrator | 2025-06-03 16:02:56.233301 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-06-03 16:02:56.233311 | orchestrator | Tuesday 03 June 2025 16:02:50 +0000 (0:00:00.277) 0:00:09.683 ********** 2025-06-03 16:02:56.233322 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:56.233333 | orchestrator | 2025-06-03 16:02:56.233343 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-06-03 16:02:56.233354 | orchestrator | Tuesday 03 June 2025 16:02:51 +0000 (0:00:00.744) 0:00:10.427 ********** 2025-06-03 16:02:56.233364 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:56.233376 | orchestrator | 2025-06-03 16:02:56.233387 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-06-03 16:02:56.233398 | orchestrator | Tuesday 03 June 2025 16:02:51 +0000 (0:00:00.127) 0:00:10.555 ********** 2025-06-03 16:02:56.233408 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:02:56.233419 | orchestrator | 2025-06-03 16:02:56.233430 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-03 16:02:56.233440 | orchestrator | Tuesday 03 June 2025 16:02:51 +0000 (0:00:00.153) 0:00:10.709 ********** 2025-06-03 16:02:56.233451 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:56.233462 | orchestrator | 2025-06-03 16:02:56.233473 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-03 16:02:56.233483 | orchestrator | Tuesday 03 June 2025 16:02:51 +0000 (0:00:00.241) 0:00:10.950 ********** 2025-06-03 16:02:56.233494 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:02:56.233504 | orchestrator | 2025-06-03 16:02:56.233515 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-03 16:02:56.233526 | orchestrator | Tuesday 03 June 2025 16:02:52 +0000 (0:00:00.251) 0:00:11.202 ********** 2025-06-03 16:02:56.233537 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:56.233547 | orchestrator | 2025-06-03 16:02:56.233558 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-03 16:02:56.233569 | orchestrator | Tuesday 03 June 2025 16:02:53 +0000 (0:00:01.265) 0:00:12.467 ********** 2025-06-03 16:02:56.233579 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:56.233600 | orchestrator | 2025-06-03 16:02:56.233611 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-03 16:02:56.233622 | orchestrator | Tuesday 03 June 2025 16:02:53 +0000 (0:00:00.242) 0:00:12.710 ********** 2025-06-03 16:02:56.233633 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:56.233643 | orchestrator | 2025-06-03 16:02:56.233654 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-06-03 16:02:56.233665 | orchestrator | Tuesday 03 June 2025 16:02:53 +0000 (0:00:00.251) 0:00:12.962 ********** 2025-06-03 16:02:56.233675 | orchestrator | 2025-06-03 16:02:56.233686 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:02:56.233697 | orchestrator | Tuesday 03 June 2025 16:02:53 +0000 (0:00:00.071) 0:00:13.034 ********** 2025-06-03 16:02:56.233708 | orchestrator | 2025-06-03 16:02:56.233718 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:02:56.233729 | orchestrator | Tuesday 03 June 2025 16:02:53 +0000 (0:00:00.070) 0:00:13.104 ********** 2025-06-03 16:02:56.233739 | orchestrator | 2025-06-03 16:02:56.233750 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-03 16:02:56.233761 | orchestrator | Tuesday 03 June 2025 16:02:54 +0000 (0:00:00.071) 0:00:13.176 ********** 2025-06-03 16:02:56.233772 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:02:56.233871 | orchestrator | 2025-06-03 16:02:56.233886 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-03 16:02:56.233897 | orchestrator | Tuesday 03 June 2025 16:02:55 +0000 (0:00:01.786) 0:00:14.962 ********** 2025-06-03 16:02:56.233908 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-03 16:02:56.233919 | orchestrator |  "msg": [ 2025-06-03 16:02:56.233931 | orchestrator |  "Validator run completed.", 2025-06-03 16:02:56.233942 | orchestrator |  "You can find the report file here:", 2025-06-03 16:02:56.233953 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-03T16:02:41+00:00-report.json", 2025-06-03 16:02:56.233965 | orchestrator |  "on the following host:", 2025-06-03 16:02:56.233976 | orchestrator |  "testbed-manager" 2025-06-03 16:02:56.233987 | orchestrator |  ] 2025-06-03 16:02:56.233999 | orchestrator | } 2025-06-03 16:02:56.234085 | orchestrator | 2025-06-03 16:02:56.234111 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 16:02:56.234131 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-03 16:02:56.234154 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:02:56.234199 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:02:56.559255 | orchestrator | 2025-06-03 16:02:56.559358 | orchestrator | 2025-06-03 16:02:56.559373 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 16:02:56.559386 | orchestrator | Tuesday 03 June 2025 16:02:56 +0000 (0:00:00.420) 0:00:15.383 ********** 2025-06-03 16:02:56.559397 | orchestrator | =============================================================================== 2025-06-03 16:02:56.559409 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.11s 2025-06-03 16:02:56.559420 | orchestrator | Write report file ------------------------------------------------------- 1.79s 2025-06-03 16:02:56.559430 | orchestrator | Aggregate test results step one ----------------------------------------- 1.27s 2025-06-03 16:02:56.559441 | orchestrator | Get container info 
------------------------------------------------------ 1.04s 2025-06-03 16:02:56.559452 | orchestrator | Create report output directory ------------------------------------------ 0.84s 2025-06-03 16:02:56.559463 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.74s 2025-06-03 16:02:56.559502 | orchestrator | Aggregate test results step one ----------------------------------------- 0.70s 2025-06-03 16:02:56.559513 | orchestrator | Get timestamp for report file ------------------------------------------- 0.62s 2025-06-03 16:02:56.559524 | orchestrator | Set test result to passed if container is existing ---------------------- 0.48s 2025-06-03 16:02:56.559534 | orchestrator | Print report file information ------------------------------------------- 0.42s 2025-06-03 16:02:56.559559 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2025-06-03 16:02:56.559571 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.32s 2025-06-03 16:02:56.559582 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s 2025-06-03 16:02:56.559592 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2025-06-03 16:02:56.559629 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2025-06-03 16:02:56.559652 | orchestrator | Fail due to missing containers ------------------------------------------ 0.28s 2025-06-03 16:02:56.559663 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2025-06-03 16:02:56.559673 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.28s 2025-06-03 16:02:56.559684 | orchestrator | Print report file information ------------------------------------------- 0.26s 2025-06-03 16:02:56.559695 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.25s 2025-06-03 16:02:56.797080 | orchestrator | + osism validate ceph-osds 2025-06-03 16:02:58.502332 | orchestrator | Registering Redlock._acquired_script 2025-06-03 16:02:58.502457 | orchestrator | Registering Redlock._extend_script 2025-06-03 16:02:58.502480 | orchestrator | Registering Redlock._release_script 2025-06-03 16:03:07.379499 | orchestrator | 2025-06-03 16:03:07.379581 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-06-03 16:03:07.379588 | orchestrator | 2025-06-03 16:03:07.379592 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-03 16:03:07.379597 | orchestrator | Tuesday 03 June 2025 16:03:02 +0000 (0:00:00.448) 0:00:00.448 ********** 2025-06-03 16:03:07.379602 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:03:07.379607 | orchestrator | 2025-06-03 16:03:07.379611 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-03 16:03:07.379615 | orchestrator | Tuesday 03 June 2025 16:03:03 +0000 (0:00:00.664) 0:00:01.112 ********** 2025-06-03 16:03:07.379618 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:03:07.379622 | orchestrator | 2025-06-03 16:03:07.379626 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-03 16:03:07.379630 | orchestrator | Tuesday 03 June 2025 16:03:03 +0000 
(0:00:00.409) 0:00:01.522 ********** 2025-06-03 16:03:07.379634 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:03:07.379638 | orchestrator | 2025-06-03 16:03:07.379641 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-03 16:03:07.379645 | orchestrator | Tuesday 03 June 2025 16:03:04 +0000 (0:00:00.945) 0:00:02.468 ********** 2025-06-03 16:03:07.379649 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:07.379654 | orchestrator | 2025-06-03 16:03:07.379658 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-03 16:03:07.379661 | orchestrator | Tuesday 03 June 2025 16:03:05 +0000 (0:00:00.159) 0:00:02.627 ********** 2025-06-03 16:03:07.379665 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:07.379669 | orchestrator | 2025-06-03 16:03:07.379673 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-03 16:03:07.379677 | orchestrator | Tuesday 03 June 2025 16:03:05 +0000 (0:00:00.148) 0:00:02.776 ********** 2025-06-03 16:03:07.379680 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:07.379684 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:03:07.379688 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:03:07.379692 | orchestrator | 2025-06-03 16:03:07.379709 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-03 16:03:07.379713 | orchestrator | Tuesday 03 June 2025 16:03:05 +0000 (0:00:00.321) 0:00:03.098 ********** 2025-06-03 16:03:07.379717 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:07.379720 | orchestrator | 2025-06-03 16:03:07.379724 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-03 16:03:07.379728 | orchestrator | Tuesday 03 June 2025 16:03:05 +0000 (0:00:00.150) 0:00:03.248 ********** 2025-06-03 16:03:07.379732 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:07.379735 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:07.379739 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:07.379743 | orchestrator | 2025-06-03 16:03:07.379747 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-06-03 16:03:07.379750 | orchestrator | Tuesday 03 June 2025 16:03:06 +0000 (0:00:00.314) 0:00:03.563 ********** 2025-06-03 16:03:07.379754 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:07.379758 | orchestrator | 2025-06-03 16:03:07.379762 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:03:07.379765 | orchestrator | Tuesday 03 June 2025 16:03:06 +0000 (0:00:00.588) 0:00:04.152 ********** 2025-06-03 16:03:07.379769 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:07.379773 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:07.379777 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:07.379780 | orchestrator | 2025-06-03 16:03:07.379784 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-06-03 16:03:07.379788 | orchestrator | Tuesday 03 June 2025 16:03:07 +0000 (0:00:00.530) 0:00:04.682 ********** 2025-06-03 16:03:07.379810 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c63f69c88df856b8a7da773a759e077f3848a2fc51c2186678a0a16270a33d7b', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 
'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-03 16:03:07.379817 | orchestrator | skipping: [testbed-node-3] => (item={'id': '05fb56ad92acb954221d9468e40f13da89c72a377a0e12cc15ce114d9a0d9ec9', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-03 16:03:07.379832 | orchestrator | skipping: [testbed-node-3] => (item={'id': '735c174a82516ecf31ba3fe45f67987ddd7bd602e1f783763823b6b9e9f592c8', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-03 16:03:07.379837 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ea4d88fed7ad6c8d48d8fa13fa92bc111be66b1fd59b1ccda9b0b4a0fc64062c', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-03 16:03:07.379843 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0d12f5c57ba5d440dc2417050707b95c6bd0d6e907dafbc178d7823c0057e464', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-03 16:03:07.379857 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a78376f5d398a52bbb1f7a891d0962fbab2565e7d28b164421cdc03f7f50dc15', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-03 16:03:07.379861 | orchestrator | skipping: [testbed-node-3] => (item={'id': '841f07a0806ea28219bc83863533fc1a8a01f9db411ec8dbdde62af64036bc83', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-03 16:03:07.379870 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6289579be9acfa9a11f53f8e768fb8f508dbab927a2d8037f8ea6191bd11e0ae', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-03 16:03:07.379878 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ac092e9bc5d8dbe332a1a746627dc43e5b906d64508d2593c2ceb27b741fe9fa', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-03 16:03:07.379882 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1be716e80c1e3e3d65dcb63314ba24d6789d06d0bc9092752c5fe6d1e68e6ed7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-03 16:03:07.379887 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'de04693a2dde717ee5ac5637ffe293219c855073e367d1951697ea0f6725c09e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-03 16:03:07.379892 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2d87fe6f4f086cf7d238fa03b35ac697d198a9b5d2066087b69b796ddc1f8bf4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-03 
16:03:07.379897 | orchestrator | ok: [testbed-node-3] => (item={'id': '33b350452a414a96a1e04fd425cee261fddf246c05734be623ce5e968f3ccfee', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-03 16:03:07.379902 | orchestrator | ok: [testbed-node-3] => (item={'id': '1d618ea791df6cbac19fdde6bb8608f870292189b68ce2b2dcdcc62d5d548713', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-03 16:03:07.379906 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f394fdc186ad916e3528c2bd676f6d858be5e20a3087ebbc5ed53f728d119953', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-06-03 16:03:07.379910 | orchestrator | skipping: [testbed-node-3] => (item={'id': '010be5109e661066152f88c332cf1d3684da0d71d15cd807fae0ce0f9fb3b361', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-03 16:03:07.379914 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7db19c64717fef4078eba5629992811617a411b22e8c489bba92ba9cd0542d3a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-03 16:03:07.379919 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4fa2a2c09bcc92d7e4d258b09f54e427f5f76f95ad4d54835e3e658bc6dff7dc', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-03 16:03:07.379923 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6cd4da4e361cea8570cf499b136547e61415c3c783286a12acf79d1f36290ea5', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-03 16:03:07.379927 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0ff513aa917e74a4362b99f9a3d61098ec9a0a2d13a8b22fea5ebd8f720a2653', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-03 16:03:07.379934 | orchestrator | skipping: [testbed-node-4] => (item={'id': '39a7bda48574b21a55a5b3eb3777ec656991e4bfeccd507c75af7b3eb025814e', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-03 16:03:07.488436 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3f3a0c25b0629ce6916321e9de55ec751709a72b8ab3305738570f4447787951', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-03 16:03:07.488556 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0892e504505e53cfb602d6a1fcaf63f9b942c3cf3f2b12f2bbd74e2df76d4c80', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-03 16:03:07.488571 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0b086ba34ef8d1d5e96e368fa1a6a646f0f370ea21f8b08aa8ab6a846147ccb3', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-03 16:03:07.488583 | orchestrator | skipping: [testbed-node-4] => (item={'id': '20cd626a5fed548a605db546e89b0e84ff926fceef7cc0932ce32e50c7d8039f', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-03 16:03:07.488593 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e2ad941d4d730b89900c2ae4d7aca54e9ecf10a5280cca4490e6080bb7092696', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-03 16:03:07.488603 | orchestrator | skipping: [testbed-node-4] => (item={'id': '798658c0cfb8291e28bc6f7fd0493084c028a4d3c94d503f4e0e9e27368134a2', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-03 16:03:07.488614 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd882e545e491be4b4f684e9562242a3e6e85339b18dfaebbf42749b502930c7c', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-03 16:03:07.488624 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2ead5c51f42f14a57d82c8aef0d85a368ae232c3919353b30b4531ffe12f73e7', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-03 16:03:07.488650 | orchestrator | skipping: [testbed-node-4] => (item={'id': '63058801166e3a6912c56d672f2f67ee061b509904955cb46014bac723fc92f2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-03 16:03:07.488662 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2c52f2f16948e0f8ed80285365011319755cb501f58075ed3d5b23e63b64892f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-03 16:03:07.488677 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f0c0e1a35284801d6bc78a0add912bd9f8ce815896d0e70bab7bcb8db7ffba22', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-03 16:03:07.488689 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c2d5cf41248794a763dbf6d0ba218dbd7ff5caa8a91142e2fa47c92fa8664c24', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-03 16:03:07.488700 | orchestrator | ok: [testbed-node-4] => (item={'id': '84b16b8d2be6cda02344998b4db1ad06225a795cdcec831c3726c546bd188bd3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-03 16:03:07.488711 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f73c63a3f42f8bdd5e1d5070c802e8efab8e0e941fac1aeac244b3d47c30403c', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 
minutes'})  2025-06-03 16:03:07.488742 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a17e259cada15947a68fa528de5085961efb8c55078dcff5f6935cacc9cd1c6f', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-03 16:03:07.488752 | orchestrator | skipping: [testbed-node-4] => (item={'id': '23f5bf62ef50207f6d278caed876c7815a60aea7a3c926179e48c9e6267d2b7d', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-03 16:03:07.488763 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b1f38205eb8d8697117da4943f1eebaa876198a187731a94873031ee5ad3f1a0', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-03 16:03:07.488773 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1c455b6ce68baec2b4ef817578182746a30e218c0e0ade556497e0b558bb2a6b', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-03 16:03:07.488783 | orchestrator | skipping: [testbed-node-4] => (item={'id': '66522fc6ed7b8ad0313746437df69c7055d281ec4fde5b9fca70269c7378a127', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-03 16:03:07.488835 | orchestrator | skipping: [testbed-node-5] => (item={'id': '067d084734d40cfae838ab0001eeaa927890edc410ee879e6c520b24dfe4e128', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-03 16:03:07.488847 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8ea000e3b92927d3e89f27c7da90e5e90e035104aa26a11d6e49686c412d3bef', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-03 16:03:07.488856 | orchestrator | skipping: [testbed-node-5] => (item={'id': '75a5f2a1df1765c2ca7d71ed9d8b86f3470871ef8b83aa26c6fe9ac9f2f4c522', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-03 16:03:07.488866 | orchestrator | skipping: [testbed-node-5] => (item={'id': '266dd175b8f5c8682600cec614e63622b1fd58ef520c7a1ada1ebde3fe30210d', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-03 16:03:07.488876 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0a9353817186625b123d0731a366d51b94a466f86631d36619e4527bdb196bfd', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-03 16:03:07.488886 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a488abec9bce54f11774559959f5943abb62c29ed82f6d9567e95286d1ae9351', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-03 16:03:07.488901 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'4ff49b7fdd9043d7b363be1f0069f7bb4f4f9b2bb638457ad6460d281a0981d1', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-03 16:03:07.488911 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6f2be8ec109a43b32d16a1aecb26cec76a5dfb361855ba097de66c13dbce4d16', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-03 16:03:07.488927 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd92b6a5fc4bb4495bec6339517e4fc6b88e9a8e9aa157eb40f93a4667a1eeeea', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-03 16:03:07.488937 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd802cfc8fd6fb4a61f5f43843c18a561a6ec5347bdd11261a91376d856c96f4e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-03 16:03:07.488954 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd5740bad0a09d018b1123cdc83ba27990e549abf1063e3ae5e5c022a44529fc9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-03 16:03:15.892602 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f7cdfd63bb93fc7668bad8ce519a74c7e01ca06561db10914a0853ea7e72b169', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-03 16:03:15.892719 | orchestrator | ok: [testbed-node-5] => (item={'id': '49acb78afd5750399d868fb0fa80ddfad413af28d5f0274ae6947a3fabce2d94', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-03 16:03:15.892736 | orchestrator | ok: [testbed-node-5] => (item={'id': '0a04aeede67132624f0a3ec621199dda1de5e67ac4acc7d7bd3898dd52c3a946', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-03 16:03:15.892748 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3b12fdc409ad6b229caee86fc332b178f70118d6e089231b6503e9f87d4160ad', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-06-03 16:03:15.892761 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6e10fcf9ba712a7c4fa4e7a9a70f34bf054d03733ab8e9c180f81daa40fc09b3', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-03 16:03:15.892775 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3f3cf893794c1a8a3a488bc1f0404d3332ffd1d3073c9eab0ca65b00048f5fe9', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-03 16:03:15.892786 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f70e059cbc0345ae270cbcb468f37a8a27f196637093f7232cac1ce220a2acf8', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 
'state': 'running', 'status': 'Up 30 minutes'})  2025-06-03 16:03:15.892797 | orchestrator | skipping: [testbed-node-5] => (item={'id': '783714bb444ceff7865da88bb6bf5a22a45dc4b4770327a5b2337e05453c1908', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-03 16:03:15.892864 | orchestrator | skipping: [testbed-node-5] => (item={'id': '05ec4ca962d10698868970ff230d698b478883e8aba78939be6a674a8b02f8a3', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-03 16:03:15.892879 | orchestrator | 2025-06-03 16:03:15.892893 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-03 16:03:15.892906 | orchestrator | Tuesday 03 June 2025 16:03:07 +0000 (0:00:00.459) 0:00:05.142 ********** 2025-06-03 16:03:15.892917 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:15.892929 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:15.892940 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:15.892976 | orchestrator | 2025-06-03 16:03:15.892987 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-03 16:03:15.892998 | orchestrator | Tuesday 03 June 2025 16:03:07 +0000 (0:00:00.297) 0:00:05.440 ********** 2025-06-03 16:03:15.893009 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:15.893037 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:03:15.893048 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:03:15.893059 | orchestrator | 2025-06-03 16:03:15.893070 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-03 16:03:15.893081 | orchestrator | Tuesday 03 June 2025 16:03:08 +0000 (0:00:00.484) 0:00:05.924 ********** 2025-06-03 16:03:15.893092 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:15.893105 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:15.893118 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:15.893130 | orchestrator | 2025-06-03 16:03:15.893142 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:03:15.893155 | orchestrator | Tuesday 03 June 2025 16:03:08 +0000 (0:00:00.341) 0:00:06.265 ********** 2025-06-03 16:03:15.893168 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:15.893181 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:15.893193 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:15.893206 | orchestrator | 2025-06-03 16:03:15.893219 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-03 16:03:15.893232 | orchestrator | Tuesday 03 June 2025 16:03:09 +0000 (0:00:00.301) 0:00:06.566 ********** 2025-06-03 16:03:15.893248 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-06-03 16:03:15.893269 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-03 16:03:15.893289 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:15.893308 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-06-03 16:03:15.893326 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-03 16:03:15.893367 | orchestrator | 
skipping: [testbed-node-4] 2025-06-03 16:03:15.893388 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-03 16:03:15.893407 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-03 16:03:15.893427 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:03:15.893446 | orchestrator | 2025-06-03 16:03:15.893466 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-03 16:03:15.893486 | orchestrator | Tuesday 03 June 2025 16:03:09 +0000 (0:00:00.328) 0:00:06.895 ********** 2025-06-03 16:03:15.893500 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:15.893511 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:15.893521 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:15.893532 | orchestrator | 2025-06-03 16:03:15.893543 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-03 16:03:15.893554 | orchestrator | Tuesday 03 June 2025 16:03:09 +0000 (0:00:00.482) 0:00:07.377 ********** 2025-06-03 16:03:15.893564 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:15.893575 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:03:15.893586 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:03:15.893596 | orchestrator | 2025-06-03 16:03:15.893607 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-03 16:03:15.893618 | orchestrator | Tuesday 03 June 2025 16:03:10 +0000 (0:00:00.292) 0:00:07.670 ********** 2025-06-03 16:03:15.893629 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:15.893639 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:03:15.893650 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:03:15.893661 | orchestrator | 2025-06-03 16:03:15.893672 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-03 16:03:15.893682 | orchestrator | Tuesday 03 June 2025 16:03:10 +0000 (0:00:00.322) 0:00:07.992 ********** 2025-06-03 16:03:15.893703 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:15.893715 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:15.893725 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:15.893736 | orchestrator | 2025-06-03 16:03:15.893747 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-03 16:03:15.893758 | orchestrator | Tuesday 03 June 2025 16:03:10 +0000 (0:00:00.304) 0:00:08.296 ********** 2025-06-03 16:03:15.893769 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:15.893780 | orchestrator | 2025-06-03 16:03:15.893790 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-03 16:03:15.893801 | orchestrator | Tuesday 03 June 2025 16:03:11 +0000 (0:00:00.689) 0:00:08.985 ********** 2025-06-03 16:03:15.893839 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:15.893851 | orchestrator | 2025-06-03 16:03:15.893862 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-03 16:03:15.893873 | orchestrator | Tuesday 03 June 2025 16:03:11 +0000 (0:00:00.245) 0:00:09.231 ********** 2025-06-03 16:03:15.893884 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:15.893894 | orchestrator | 2025-06-03 16:03:15.893905 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-06-03 16:03:15.893916 | orchestrator | Tuesday 03 June 2025 16:03:11 +0000 (0:00:00.243) 0:00:09.475 ********** 2025-06-03 16:03:15.893926 | orchestrator | 2025-06-03 16:03:15.893937 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:03:15.893948 | orchestrator | Tuesday 03 June 2025 16:03:12 +0000 (0:00:00.081) 0:00:09.557 ********** 2025-06-03 16:03:15.893958 | orchestrator | 2025-06-03 16:03:15.893969 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:03:15.893980 | orchestrator | Tuesday 03 June 2025 16:03:12 +0000 (0:00:00.071) 0:00:09.628 ********** 2025-06-03 16:03:15.893991 | orchestrator | 2025-06-03 16:03:15.894001 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-03 16:03:15.894012 | orchestrator | Tuesday 03 June 2025 16:03:12 +0000 (0:00:00.069) 0:00:09.698 ********** 2025-06-03 16:03:15.894090 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:15.894102 | orchestrator | 2025-06-03 16:03:15.894112 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-03 16:03:15.894123 | orchestrator | Tuesday 03 June 2025 16:03:12 +0000 (0:00:00.257) 0:00:09.956 ********** 2025-06-03 16:03:15.894134 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:15.894144 | orchestrator | 2025-06-03 16:03:15.894156 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:03:15.894167 | orchestrator | Tuesday 03 June 2025 16:03:12 +0000 (0:00:00.234) 0:00:10.191 ********** 2025-06-03 16:03:15.894177 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:15.894188 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:15.894199 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:15.894210 | orchestrator | 2025-06-03 16:03:15.894221 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-03 16:03:15.894231 | orchestrator | Tuesday 03 June 2025 16:03:12 +0000 (0:00:00.285) 0:00:10.476 ********** 2025-06-03 16:03:15.894242 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:15.894253 | orchestrator | 2025-06-03 16:03:15.894263 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-03 16:03:15.894274 | orchestrator | Tuesday 03 June 2025 16:03:13 +0000 (0:00:00.683) 0:00:11.160 ********** 2025-06-03 16:03:15.894285 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-03 16:03:15.894296 | orchestrator | 2025-06-03 16:03:15.894307 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-03 16:03:15.894318 | orchestrator | Tuesday 03 June 2025 16:03:15 +0000 (0:00:01.731) 0:00:12.891 ********** 2025-06-03 16:03:15.894328 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:15.894339 | orchestrator | 2025-06-03 16:03:15.894350 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-06-03 16:03:15.894369 | orchestrator | Tuesday 03 June 2025 16:03:15 +0000 (0:00:00.128) 0:00:13.020 ********** 2025-06-03 16:03:15.894380 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:15.894391 | orchestrator | 2025-06-03 16:03:15.894402 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 
2025-06-03 16:03:15.894412 | orchestrator | Tuesday 03 June 2025 16:03:15 +0000 (0:00:00.293) 0:00:13.313 ********** 2025-06-03 16:03:15.894437 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:28.150745 | orchestrator | 2025-06-03 16:03:28.150980 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-06-03 16:03:28.151010 | orchestrator | Tuesday 03 June 2025 16:03:15 +0000 (0:00:00.113) 0:00:13.426 ********** 2025-06-03 16:03:28.151030 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:28.151049 | orchestrator | 2025-06-03 16:03:28.151067 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:03:28.151083 | orchestrator | Tuesday 03 June 2025 16:03:16 +0000 (0:00:00.129) 0:00:13.556 ********** 2025-06-03 16:03:28.151102 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:28.151188 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:28.151211 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:28.151229 | orchestrator | 2025-06-03 16:03:28.151249 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-06-03 16:03:28.151268 | orchestrator | Tuesday 03 June 2025 16:03:16 +0000 (0:00:00.269) 0:00:13.826 ********** 2025-06-03 16:03:28.151287 | orchestrator | changed: [testbed-node-3] 2025-06-03 16:03:28.151308 | orchestrator | changed: [testbed-node-4] 2025-06-03 16:03:28.151326 | orchestrator | changed: [testbed-node-5] 2025-06-03 16:03:28.151345 | orchestrator | 2025-06-03 16:03:28.151364 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-03 16:03:28.151384 | orchestrator | Tuesday 03 June 2025 16:03:18 +0000 (0:00:02.583) 0:00:16.409 ********** 2025-06-03 16:03:28.151402 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:28.151421 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:28.151440 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:28.151458 | orchestrator | 2025-06-03 16:03:28.151476 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-03 16:03:28.151494 | orchestrator | Tuesday 03 June 2025 16:03:19 +0000 (0:00:00.301) 0:00:16.711 ********** 2025-06-03 16:03:28.151511 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:28.151530 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:28.151550 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:28.151567 | orchestrator | 2025-06-03 16:03:28.151586 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-03 16:03:28.151604 | orchestrator | Tuesday 03 June 2025 16:03:19 +0000 (0:00:00.486) 0:00:17.197 ********** 2025-06-03 16:03:28.151622 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:28.151640 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:03:28.151659 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:03:28.151676 | orchestrator | 2025-06-03 16:03:28.151693 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-03 16:03:28.151712 | orchestrator | Tuesday 03 June 2025 16:03:19 +0000 (0:00:00.295) 0:00:17.493 ********** 2025-06-03 16:03:28.151729 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:28.151746 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:28.151764 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:28.151782 | orchestrator | 2025-06-03 16:03:28.151800 | orchestrator | TASK 
[Fail if count of unencrypted OSDs does not match] ************************ 2025-06-03 16:03:28.151818 | orchestrator | Tuesday 03 June 2025 16:03:20 +0000 (0:00:00.487) 0:00:17.981 ********** 2025-06-03 16:03:28.151866 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:28.151886 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:03:28.151897 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:03:28.151971 | orchestrator | 2025-06-03 16:03:28.151983 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-03 16:03:28.151994 | orchestrator | Tuesday 03 June 2025 16:03:20 +0000 (0:00:00.284) 0:00:18.265 ********** 2025-06-03 16:03:28.152029 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:28.152040 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:03:28.152051 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:03:28.152062 | orchestrator | 2025-06-03 16:03:28.152073 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:03:28.152083 | orchestrator | Tuesday 03 June 2025 16:03:20 +0000 (0:00:00.276) 0:00:18.541 ********** 2025-06-03 16:03:28.152094 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:28.152105 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:28.152115 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:28.152126 | orchestrator | 2025-06-03 16:03:28.152136 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-06-03 16:03:28.152147 | orchestrator | Tuesday 03 June 2025 16:03:21 +0000 (0:00:00.471) 0:00:19.012 ********** 2025-06-03 16:03:28.152158 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:28.152169 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:28.152186 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:28.152197 | orchestrator | 2025-06-03 16:03:28.152208 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-03 16:03:28.152219 | orchestrator | Tuesday 03 June 2025 16:03:22 +0000 (0:00:00.672) 0:00:19.685 ********** 2025-06-03 16:03:28.152230 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:28.152240 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:28.152251 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:28.152262 | orchestrator | 2025-06-03 16:03:28.152273 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-03 16:03:28.152283 | orchestrator | Tuesday 03 June 2025 16:03:22 +0000 (0:00:00.318) 0:00:20.004 ********** 2025-06-03 16:03:28.152294 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:28.152305 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:03:28.152315 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:03:28.152326 | orchestrator | 2025-06-03 16:03:28.152337 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-03 16:03:28.152348 | orchestrator | Tuesday 03 June 2025 16:03:22 +0000 (0:00:00.310) 0:00:20.315 ********** 2025-06-03 16:03:28.152366 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:03:28.152389 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:03:28.152417 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:03:28.152433 | orchestrator | 2025-06-03 16:03:28.152450 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-03 16:03:28.152467 | orchestrator | Tuesday 03 
June 2025 16:03:23 +0000 (0:00:00.290) 0:00:20.605 ********** 2025-06-03 16:03:28.152483 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:03:28.152500 | orchestrator | 2025-06-03 16:03:28.152518 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-03 16:03:28.152534 | orchestrator | Tuesday 03 June 2025 16:03:23 +0000 (0:00:00.641) 0:00:21.247 ********** 2025-06-03 16:03:28.152552 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:03:28.152569 | orchestrator | 2025-06-03 16:03:28.152617 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-03 16:03:28.152636 | orchestrator | Tuesday 03 June 2025 16:03:23 +0000 (0:00:00.235) 0:00:21.483 ********** 2025-06-03 16:03:28.152654 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:03:28.152675 | orchestrator | 2025-06-03 16:03:28.152693 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-03 16:03:28.152711 | orchestrator | Tuesday 03 June 2025 16:03:25 +0000 (0:00:01.615) 0:00:23.098 ********** 2025-06-03 16:03:28.152725 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:03:28.152736 | orchestrator | 2025-06-03 16:03:28.152748 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-03 16:03:28.152758 | orchestrator | Tuesday 03 June 2025 16:03:25 +0000 (0:00:00.244) 0:00:23.343 ********** 2025-06-03 16:03:28.152769 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:03:28.152791 | orchestrator | 2025-06-03 16:03:28.152802 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:03:28.152812 | orchestrator | Tuesday 03 June 2025 16:03:26 +0000 (0:00:00.244) 0:00:23.588 ********** 2025-06-03 16:03:28.152823 | orchestrator | 2025-06-03 16:03:28.152860 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:03:28.152871 | orchestrator | Tuesday 03 June 2025 16:03:26 +0000 (0:00:00.067) 0:00:23.656 ********** 2025-06-03 16:03:28.152882 | orchestrator | 2025-06-03 16:03:28.152893 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:03:28.152903 | orchestrator | Tuesday 03 June 2025 16:03:26 +0000 (0:00:00.068) 0:00:23.724 ********** 2025-06-03 16:03:28.152914 | orchestrator | 2025-06-03 16:03:28.152925 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-03 16:03:28.152935 | orchestrator | Tuesday 03 June 2025 16:03:26 +0000 (0:00:00.072) 0:00:23.797 ********** 2025-06-03 16:03:28.152946 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:03:28.152957 | orchestrator | 2025-06-03 16:03:28.152967 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-03 16:03:28.152978 | orchestrator | Tuesday 03 June 2025 16:03:27 +0000 (0:00:01.261) 0:00:25.058 ********** 2025-06-03 16:03:28.152989 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-03 16:03:28.152999 | orchestrator |  "msg": [ 2025-06-03 16:03:28.153011 | orchestrator |  "Validator run completed.", 2025-06-03 16:03:28.153022 | orchestrator |  "You can find the report file here:", 2025-06-03 16:03:28.153033 | 
orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-03T16:03:03+00:00-report.json", 2025-06-03 16:03:28.153045 | orchestrator |  "on the following host:", 2025-06-03 16:03:28.153056 | orchestrator |  "testbed-manager" 2025-06-03 16:03:28.153067 | orchestrator |  ] 2025-06-03 16:03:28.153078 | orchestrator | } 2025-06-03 16:03:28.153089 | orchestrator | 2025-06-03 16:03:28.153100 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 16:03:28.153113 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-03 16:03:28.153125 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-03 16:03:28.153135 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-03 16:03:28.153146 | orchestrator | 2025-06-03 16:03:28.153157 | orchestrator | 2025-06-03 16:03:28.153168 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 16:03:28.153179 | orchestrator | Tuesday 03 June 2025 16:03:28 +0000 (0:00:00.597) 0:00:25.655 ********** 2025-06-03 16:03:28.153189 | orchestrator | =============================================================================== 2025-06-03 16:03:28.153207 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.58s 2025-06-03 16:03:28.153218 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.73s 2025-06-03 16:03:28.153228 | orchestrator | Aggregate test results step one ----------------------------------------- 1.62s 2025-06-03 16:03:28.153239 | orchestrator | Write report file ------------------------------------------------------- 1.26s 2025-06-03 16:03:28.153250 | orchestrator | Create report output directory ------------------------------------------ 0.95s 2025-06-03 16:03:28.153260 | orchestrator | Aggregate test results step one ----------------------------------------- 0.69s 2025-06-03 16:03:28.153271 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.68s 2025-06-03 16:03:28.153281 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.67s 2025-06-03 16:03:28.153292 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s 2025-06-03 16:03:28.153310 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.64s 2025-06-03 16:03:28.153321 | orchestrator | Print report file information ------------------------------------------- 0.60s 2025-06-03 16:03:28.153331 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.59s 2025-06-03 16:03:28.153342 | orchestrator | Prepare test data ------------------------------------------------------- 0.53s 2025-06-03 16:03:28.153353 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.49s 2025-06-03 16:03:28.153363 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2025-06-03 16:03:28.153374 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.48s 2025-06-03 16:03:28.153393 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.48s 2025-06-03 16:03:28.410332 | orchestrator | Prepare test data 
------------------------------------------------------- 0.47s 2025-06-03 16:03:28.410437 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.46s 2025-06-03 16:03:28.410451 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.41s 2025-06-03 16:03:28.657453 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-03 16:03:28.665662 | orchestrator | + set -e 2025-06-03 16:03:28.665912 | orchestrator | + source /opt/manager-vars.sh 2025-06-03 16:03:28.665977 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-03 16:03:28.665997 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-03 16:03:28.666082 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-03 16:03:28.666097 | orchestrator | ++ CEPH_VERSION=reef 2025-06-03 16:03:28.666109 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-03 16:03:28.666122 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-03 16:03:28.666133 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-03 16:03:28.666144 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-03 16:03:28.666155 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-03 16:03:28.666166 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-03 16:03:28.666176 | orchestrator | ++ export ARA=false 2025-06-03 16:03:28.666187 | orchestrator | ++ ARA=false 2025-06-03 16:03:28.666198 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-03 16:03:28.666209 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-03 16:03:28.666220 | orchestrator | ++ export TEMPEST=false 2025-06-03 16:03:28.666230 | orchestrator | ++ TEMPEST=false 2025-06-03 16:03:28.666241 | orchestrator | ++ export IS_ZUUL=true 2025-06-03 16:03:28.666252 | orchestrator | ++ IS_ZUUL=true 2025-06-03 16:03:28.666263 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.73 2025-06-03 16:03:28.666274 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.73 2025-06-03 16:03:28.666285 | orchestrator | ++ export EXTERNAL_API=false 2025-06-03 16:03:28.666295 | orchestrator | ++ EXTERNAL_API=false 2025-06-03 16:03:28.666306 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-03 16:03:28.666316 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-03 16:03:28.666327 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-03 16:03:28.666338 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-03 16:03:28.666349 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-03 16:03:28.666359 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-03 16:03:28.666370 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-03 16:03:28.666381 | orchestrator | + source /etc/os-release 2025-06-03 16:03:28.666391 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-03 16:03:28.666402 | orchestrator | ++ NAME=Ubuntu 2025-06-03 16:03:28.666413 | orchestrator | ++ VERSION_ID=24.04 2025-06-03 16:03:28.666424 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-03 16:03:28.666434 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-03 16:03:28.666446 | orchestrator | ++ ID=ubuntu 2025-06-03 16:03:28.666460 | orchestrator | ++ ID_LIKE=debian 2025-06-03 16:03:28.666485 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-03 16:03:28.666499 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-03 16:03:28.666512 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-03 16:03:28.666525 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 
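The ceph-osds validator that just finished reduces to two assertions per storage node: the count of running ceph-osd containers matches the expected number of OSDs on that host, and every OSD in the cluster map is up and in. A minimal standalone sketch of the container-count and up-state half of that is below; it assumes Docker as the container engine, a working admin ceph CLI and jq on the host, and a per-host EXPECTED_OSDS value, none of which are taken from the job output, and it is not the validator's actual implementation (that is a set of Ansible tasks).

    #!/usr/bin/env bash
    # Sketch: per-host ceph-osd container count plus cluster-wide up-state check.
    set -e
    EXPECTED_OSDS=2   # placeholder: two OSDs per testbed node in this run

    running=$(docker ps --filter name=ceph-osd --filter status=running \
                        --format '{{.Names}}' | wc -l)
    if [ "$running" -ne "$EXPECTED_OSDS" ]; then
        echo "FAILED: expected $EXPECTED_OSDS running ceph-osd containers, found $running"
        exit 1
    fi

    # Any OSD whose status is not "up" in the CRUSH tree fails the check.
    down=$(ceph osd tree -f json | jq -r '.nodes[] | select(.type=="osd" and .status!="up") | .name')
    if [ -n "$down" ]; then
        echo "FAILED: OSDs not up: $down"
        exit 1
    fi
    echo "PASSED: $running ceph-osd containers running, all OSDs up"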
2025-06-03 16:03:28.666540 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-03 16:03:28.666552 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-03 16:03:28.666565 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-03 16:03:28.666579 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-03 16:03:28.666618 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-03 16:03:28.697384 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-03 16:03:50.688198 | orchestrator | 2025-06-03 16:03:50.688334 | orchestrator | # Status of Elasticsearch 2025-06-03 16:03:50.688363 | orchestrator | 2025-06-03 16:03:50.688382 | orchestrator | + pushd /opt/configuration/contrib 2025-06-03 16:03:50.688403 | orchestrator | + echo 2025-06-03 16:03:50.688422 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-03 16:03:50.688439 | orchestrator | + echo 2025-06-03 16:03:50.688459 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-03 16:03:50.882328 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-03 16:03:50.882463 | orchestrator | 2025-06-03 16:03:50.882493 | orchestrator | # Status of MariaDB 2025-06-03 16:03:50.882516 | orchestrator | 2025-06-03 16:03:50.882536 | orchestrator | + echo 2025-06-03 16:03:50.882555 | orchestrator | + echo '# Status of MariaDB' 2025-06-03 16:03:50.882569 | orchestrator | + echo 2025-06-03 16:03:50.882581 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-03 16:03:50.882601 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-03 16:03:50.949846 | orchestrator | Reading package lists... 2025-06-03 16:03:51.267661 | orchestrator | Building dependency tree... 2025-06-03 16:03:51.268263 | orchestrator | Reading state information... 2025-06-03 16:03:51.643621 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-03 16:03:51.643720 | orchestrator | bc set to manually installed. 2025-06-03 16:03:51.643733 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
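Before running its checks, 200-infrastructure.sh makes sure the Nagios-style prerequisites (libmonitoring-plugin-perl, libwww-perl, libjson-perl, monitoring-plugins-basic, mysql-client) are present, keyed off /etc/os-release. The usual idempotent form of that pattern is sketched below; whether the script really short-circuits on dpkg -s exactly like this is an assumption, only the distro test and the package list are taken from the output above.

    # Sketch: install the monitoring prerequisites only when missing (Debian/Ubuntu).
    . /etc/os-release
    packages="libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client"
    if [ "$ID" = "ubuntu" ] || [ "$ID_LIKE" = "debian" ]; then
        # dpkg -s exits non-zero if any listed package is missing; only then call apt-get.
        dpkg -s $packages >/dev/null 2>&1 || sudo apt-get install -y $packages
    fi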
2025-06-03 16:03:52.297449 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-03 16:03:52.297549 | orchestrator | 2025-06-03 16:03:52.297566 | orchestrator | # Status of Prometheus 2025-06-03 16:03:52.297579 | orchestrator | 2025-06-03 16:03:52.297590 | orchestrator | + echo 2025-06-03 16:03:52.297602 | orchestrator | + echo '# Status of Prometheus' 2025-06-03 16:03:52.297614 | orchestrator | + echo 2025-06-03 16:03:52.297625 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-03 16:03:52.366525 | orchestrator | Unauthorized 2025-06-03 16:03:52.370199 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-03 16:03:52.434976 | orchestrator | Unauthorized 2025-06-03 16:03:52.438599 | orchestrator | 2025-06-03 16:03:52.438701 | orchestrator | # Status of RabbitMQ 2025-06-03 16:03:52.438716 | orchestrator | 2025-06-03 16:03:52.438729 | orchestrator | + echo 2025-06-03 16:03:52.438741 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-03 16:03:52.438752 | orchestrator | + echo 2025-06-03 16:03:52.438765 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-03 16:03:52.864219 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-03 16:03:52.873679 | orchestrator | 2025-06-03 16:03:52.873766 | orchestrator | # Status of Redis 2025-06-03 16:03:52.873780 | orchestrator | 2025-06-03 16:03:52.873792 | orchestrator | + echo 2025-06-03 16:03:52.873804 | orchestrator | + echo '# Status of Redis' 2025-06-03 16:03:52.873816 | orchestrator | + echo 2025-06-03 16:03:52.873828 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-03 16:03:52.879546 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.002600s;;;0.000000;10.000000 2025-06-03 16:03:52.879641 | orchestrator | 2025-06-03 16:03:52.879670 | orchestrator | # Create backup of MariaDB database 2025-06-03 16:03:52.879690 | orchestrator | 2025-06-03 16:03:52.879708 | orchestrator | + popd 2025-06-03 16:03:52.879727 | orchestrator | + echo 2025-06-03 16:03:52.879743 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-03 16:03:52.879760 | orchestrator | + echo 2025-06-03 16:03:52.879779 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-03 16:03:54.648740 | orchestrator | 2025-06-03 16:03:54 | INFO  | Task 574ff419-eb9c-4047-83d1-c1f6ddfab0c4 (mariadb_backup) was prepared for execution. 2025-06-03 16:03:54.649170 | orchestrator | 2025-06-03 16:03:54 | INFO  | It takes a moment until task 574ff419-eb9c-4047-83d1-c1f6ddfab0c4 (mariadb_backup) has been started and output is visible here. 
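The Prometheus probes above print "Unauthorized" only because the script curls /-/healthy and /-/ready without credentials; those are the standard Prometheus liveness and readiness endpoints and answer normally when authenticated. The Galera check boils down to reading wsrep_cluster_size. A rough equivalent of both with plain clients, reusing the hostnames from the output and treating the credentials as placeholders:

    # Sketch: the same health signals without the Nagios plugins.
    # Galera: the cluster size should equal the number of MariaDB nodes (3 here).
    mysql -h api-int.testbed.osism.xyz -u root_shard_0 -p"$MARIADB_ROOT_PASSWORD" \
          -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"

    # Prometheus: /-/healthy and /-/ready return HTTP 200 when queried with
    # valid basic-auth credentials (user and password below are placeholders).
    curl -fsS -u admin:"$PROMETHEUS_PASSWORD" https://api-int.testbed.osism.xyz:9091/-/healthy
    curl -fsS -u admin:"$PROMETHEUS_PASSWORD" https://api-int.testbed.osism.xyz:9091/-/ready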
2025-06-03 16:03:58.621850 | orchestrator | 2025-06-03 16:03:58.624799 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 16:03:58.624934 | orchestrator | 2025-06-03 16:03:58.625675 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 16:03:58.628123 | orchestrator | Tuesday 03 June 2025 16:03:58 +0000 (0:00:00.183) 0:00:00.183 ********** 2025-06-03 16:03:58.807980 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:03:58.935411 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:03:58.936103 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:03:58.936157 | orchestrator | 2025-06-03 16:03:58.936189 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 16:03:58.936626 | orchestrator | Tuesday 03 June 2025 16:03:58 +0000 (0:00:00.318) 0:00:00.501 ********** 2025-06-03 16:03:59.497322 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-03 16:03:59.500106 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-03 16:03:59.501414 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-03 16:03:59.501458 | orchestrator | 2025-06-03 16:03:59.502202 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-03 16:03:59.503074 | orchestrator | 2025-06-03 16:03:59.503927 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-03 16:03:59.504632 | orchestrator | Tuesday 03 June 2025 16:03:59 +0000 (0:00:00.560) 0:00:01.061 ********** 2025-06-03 16:03:59.884484 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 16:03:59.887581 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-03 16:03:59.887651 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-03 16:03:59.889733 | orchestrator | 2025-06-03 16:03:59.890581 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-03 16:03:59.891223 | orchestrator | Tuesday 03 June 2025 16:03:59 +0000 (0:00:00.386) 0:00:01.448 ********** 2025-06-03 16:04:00.446417 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 16:04:00.451423 | orchestrator | 2025-06-03 16:04:00.451511 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-03 16:04:00.452245 | orchestrator | Tuesday 03 June 2025 16:04:00 +0000 (0:00:00.562) 0:00:02.010 ********** 2025-06-03 16:04:03.503020 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:04:03.503252 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:03.503273 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:04:03.503284 | orchestrator | 2025-06-03 16:04:03.504034 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-06-03 16:04:03.506185 | orchestrator | Tuesday 03 June 2025 16:04:03 +0000 (0:00:03.050) 0:00:05.060 ********** 2025-06-03 16:06:07.634563 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-03 16:06:07.634681 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-03 16:06:07.636838 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-03 16:06:07.638844 | orchestrator | 
mariadb_bootstrap_restart 2025-06-03 16:06:07.712178 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:06:07.713705 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:06:07.718013 | orchestrator | changed: [testbed-node-0] 2025-06-03 16:06:07.720131 | orchestrator | 2025-06-03 16:06:07.722454 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-03 16:06:07.722510 | orchestrator | skipping: no hosts matched 2025-06-03 16:06:07.727174 | orchestrator | 2025-06-03 16:06:07.728613 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-03 16:06:07.729781 | orchestrator | skipping: no hosts matched 2025-06-03 16:06:07.730755 | orchestrator | 2025-06-03 16:06:07.732708 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-03 16:06:07.733351 | orchestrator | skipping: no hosts matched 2025-06-03 16:06:07.734219 | orchestrator | 2025-06-03 16:06:07.736591 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-03 16:06:07.737395 | orchestrator | 2025-06-03 16:06:07.737999 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-03 16:06:07.738996 | orchestrator | Tuesday 03 June 2025 16:06:07 +0000 (0:02:04.215) 0:02:09.276 ********** 2025-06-03 16:06:07.893060 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:06:08.025232 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:06:08.026324 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:06:08.026892 | orchestrator | 2025-06-03 16:06:08.027734 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-03 16:06:08.028173 | orchestrator | Tuesday 03 June 2025 16:06:08 +0000 (0:00:00.315) 0:02:09.591 ********** 2025-06-03 16:06:08.400544 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:06:08.449855 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:06:08.451050 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:06:08.452604 | orchestrator | 2025-06-03 16:06:08.453806 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 16:06:08.454455 | orchestrator | 2025-06-03 16:06:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 16:06:08.454995 | orchestrator | 2025-06-03 16:06:08 | INFO  | Please wait and do not abort execution. 
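In the backup play above, "Taking full database backup via Mariabackup" reports changed only on testbed-node-0 and skips the other two nodes, i.e. the backup is taken once for the shard rather than on every Galera member, and it accounts for almost all of the runtime (about 124 seconds). Outside the Kolla container the same full physical backup would look roughly like this; the target directory and credentials are placeholders, and the --prepare step is what makes the copy restorable:

    # Sketch: full physical MariaDB backup with Mariabackup (what the Kolla task wraps).
    TARGET=/var/backups/mariadb/full-$(date +%F)
    mariabackup --backup --target-dir="$TARGET" \
                --host=127.0.0.1 --user=backup --password="$BACKUP_PASSWORD"
    # Apply the redo log so the backup is consistent and ready for restore.
    mariabackup --prepare --target-dir="$TARGET"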
2025-06-03 16:06:08.456238 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:06:08.457047 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 16:06:08.458116 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 16:06:08.459349 | orchestrator | 2025-06-03 16:06:08.459652 | orchestrator | 2025-06-03 16:06:08.460981 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 16:06:08.461653 | orchestrator | Tuesday 03 June 2025 16:06:08 +0000 (0:00:00.420) 0:02:10.011 ********** 2025-06-03 16:06:08.462621 | orchestrator | =============================================================================== 2025-06-03 16:06:08.463365 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 124.22s 2025-06-03 16:06:08.464462 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.05s 2025-06-03 16:06:08.464691 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s 2025-06-03 16:06:08.466234 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2025-06-03 16:06:08.466959 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.42s 2025-06-03 16:06:08.467677 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2025-06-03 16:06:08.468379 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-06-03 16:06:08.469444 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2025-06-03 16:06:09.026673 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-03 16:06:09.037320 | orchestrator | + set -e 2025-06-03 16:06:09.038263 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-03 16:06:09.038319 | orchestrator | ++ export INTERACTIVE=false 2025-06-03 16:06:09.038330 | orchestrator | ++ INTERACTIVE=false 2025-06-03 16:06:09.038337 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-03 16:06:09.038344 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-03 16:06:09.038352 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-03 16:06:09.038934 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-03 16:06:09.048402 | orchestrator | 2025-06-03 16:06:09.048465 | orchestrator | # OpenStack endpoints 2025-06-03 16:06:09.048475 | orchestrator | 2025-06-03 16:06:09.048487 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-03 16:06:09.048496 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-03 16:06:09.048505 | orchestrator | + export OS_CLOUD=admin 2025-06-03 16:06:09.048512 | orchestrator | + OS_CLOUD=admin 2025-06-03 16:06:09.048520 | orchestrator | + echo 2025-06-03 16:06:09.048527 | orchestrator | + echo '# OpenStack endpoints' 2025-06-03 16:06:09.048534 | orchestrator | + echo 2025-06-03 16:06:09.048541 | orchestrator | + openstack endpoint list 2025-06-03 16:06:12.608142 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-03 16:06:12.608226 | orchestrator | | 
ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-03 16:06:12.608236 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-03 16:06:12.608243 | orchestrator | | 0a97dbeb17c44f8cb5ad057507087049 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-06-03 16:06:12.608250 | orchestrator | | 190a75b932334a138f0bd943ce0284e8 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-06-03 16:06:12.608256 | orchestrator | | 40978abf345b4afea6a9246ef50a2b6c | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-06-03 16:06:12.608262 | orchestrator | | 4b27c8bd3cd4428aa428911bd6ce2a3e | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-06-03 16:06:12.608282 | orchestrator | | 5d7a1fd28fde49938e89bcdd6be636e4 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-06-03 16:06:12.608289 | orchestrator | | 733bb0bc868e4bab968223f4bc857820 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-03 16:06:12.608295 | orchestrator | | 7b9bbc2c5cb74735847791a35f8c58a6 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-06-03 16:06:12.608300 | orchestrator | | 7e1a7a15ffae4467a9422a5b3a946079 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-06-03 16:06:12.608306 | orchestrator | | 85ca987e61824714ac6d3df1c034ffd9 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-06-03 16:06:12.608312 | orchestrator | | 8d2a75b27e2b412ea4cc96ca974328cb | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-06-03 16:06:12.608319 | orchestrator | | 8fdff198f53747308b72a912bf4bf31f | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-06-03 16:06:12.608324 | orchestrator | | 9531fa112e694c94bed386fd220f7242 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-06-03 16:06:12.608330 | orchestrator | | a338af2414a84eaeb6eedec6030a5aa0 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-06-03 16:06:12.608335 | orchestrator | | b5c9a1c67eac4b69946c6202981225da | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-06-03 16:06:12.608341 | orchestrator | | bff0fb29642e4886b716c1b8164416b4 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-06-03 16:06:12.608366 | orchestrator | | c7bfd2fe0afe400db56ed9a4c9573d8b | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-06-03 16:06:12.608372 | orchestrator | | cbd73da27d4d4d389237e691deead3fa | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-06-03 16:06:12.608378 | orchestrator | | cc6afe79bf54459d9805ff73579db5d8 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-06-03 16:06:12.608384 | orchestrator | | dac501a1e54244989a9e68865f188a16 | RegionOne | swift | object-store | True | internal | 
https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-03 16:06:12.608390 | orchestrator | | db54d924bf5a4a2ea01cf010489f691f | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-03 16:06:12.608409 | orchestrator | | ef3ac560a1814d249721b573c429b9c8 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-06-03 16:06:12.608416 | orchestrator | | f98299564b024ef380763bdd22f39971 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-03 16:06:12.608422 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-03 16:06:12.882662 | orchestrator | 2025-06-03 16:06:12.882755 | orchestrator | # Cinder 2025-06-03 16:06:12.882767 | orchestrator | 2025-06-03 16:06:12.882777 | orchestrator | + echo 2025-06-03 16:06:12.882785 | orchestrator | + echo '# Cinder' 2025-06-03 16:06:12.882794 | orchestrator | + echo 2025-06-03 16:06:12.882802 | orchestrator | + openstack volume service list 2025-06-03 16:06:16.239560 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:16.239672 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-06-03 16:06:16.239687 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:16.239696 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-03T16:06:15.000000 | 2025-06-03 16:06:16.239707 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-03T16:06:07.000000 | 2025-06-03 16:06:16.239717 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-03T16:06:08.000000 | 2025-06-03 16:06:16.239746 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-03T16:06:12.000000 | 2025-06-03 16:06:16.239757 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-03T16:06:12.000000 | 2025-06-03 16:06:16.239766 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-03T16:06:12.000000 | 2025-06-03 16:06:16.239775 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-03T16:06:08.000000 | 2025-06-03 16:06:16.239784 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-03T16:06:08.000000 | 2025-06-03 16:06:16.239794 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-03T16:06:09.000000 | 2025-06-03 16:06:16.239804 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:16.493265 | orchestrator | 2025-06-03 16:06:16.493358 | orchestrator | # Neutron 2025-06-03 16:06:16.493371 | orchestrator | 2025-06-03 16:06:16.493381 | orchestrator | + echo 2025-06-03 16:06:16.493390 | orchestrator | + echo '# Neutron' 2025-06-03 16:06:16.493400 | orchestrator | + echo 2025-06-03 16:06:16.493433 | orchestrator | + openstack network agent list 2025-06-03 16:06:19.315467 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-03 16:06:19.315592 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-06-03 16:06:19.315613 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-03 16:06:19.315631 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-06-03 16:06:19.315649 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-06-03 16:06:19.315666 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-06-03 16:06:19.315683 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-06-03 16:06:19.315701 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-06-03 16:06:19.315719 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-06-03 16:06:19.315738 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-03 16:06:19.315756 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-03 16:06:19.315774 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-03 16:06:19.315793 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-03 16:06:19.589277 | orchestrator | + openstack network service provider list 2025-06-03 16:06:22.147602 | orchestrator | +---------------+------+---------+ 2025-06-03 16:06:22.147686 | orchestrator | | Service Type | Name | Default | 2025-06-03 16:06:22.147692 | orchestrator | +---------------+------+---------+ 2025-06-03 16:06:22.147696 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-06-03 16:06:22.147700 | orchestrator | +---------------+------+---------+ 2025-06-03 16:06:22.400644 | orchestrator | 2025-06-03 16:06:22.400754 | orchestrator | # Nova 2025-06-03 16:06:22.400769 | orchestrator | 2025-06-03 16:06:22.400781 | orchestrator | + echo 2025-06-03 16:06:22.400792 | orchestrator | + echo '# Nova' 2025-06-03 16:06:22.400803 | orchestrator | + echo 2025-06-03 16:06:22.400813 | orchestrator | + openstack compute service list 2025-06-03 16:06:25.724010 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:25.724127 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-06-03 16:06:25.724135 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:25.724140 | orchestrator | | 19fbb82e-8702-40e1-b65b-46f8edab7b41 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-03T16:06:18.000000 | 2025-06-03 16:06:25.724144 | orchestrator | | 
3777d9ba-7ba6-4698-ab35-185cf6d29889 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-03T16:06:19.000000 | 2025-06-03 16:06:25.724149 | orchestrator | | b3db42f8-73e7-4eae-8db4-08cd9ee9d20b | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-03T16:06:19.000000 | 2025-06-03 16:06:25.724153 | orchestrator | | eb8e2ed3-920a-4c13-82f2-87e552ffa795 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-03T16:06:18.000000 | 2025-06-03 16:06:25.724176 | orchestrator | | 849ea61a-f03d-4180-af06-11cf68f23359 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-03T16:06:21.000000 | 2025-06-03 16:06:25.724192 | orchestrator | | b29d7d40-6263-4a31-bbb8-8ac1da6f22ae | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-03T16:06:21.000000 | 2025-06-03 16:06:25.724196 | orchestrator | | b678be7f-b1db-46b8-a576-9f7155dfa7ce | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-03T16:06:24.000000 | 2025-06-03 16:06:25.724201 | orchestrator | | aa536eab-4674-4fea-bd0d-efcd5e4cb958 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-03T16:06:15.000000 | 2025-06-03 16:06:25.724205 | orchestrator | | 813ddfa2-bb9d-47d1-b723-9308fa33a97e | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-03T16:06:16.000000 | 2025-06-03 16:06:25.724209 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:25.980550 | orchestrator | + openstack hypervisor list 2025-06-03 16:06:30.271144 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-03 16:06:30.271251 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-06-03 16:06:30.271266 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-03 16:06:30.271278 | orchestrator | | fc38c4f5-38f8-45d1-be95-e513c491b0f1 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-06-03 16:06:30.271289 | orchestrator | | a898a8d9-f4d3-4ecb-90fa-a067d94d860b | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-06-03 16:06:30.271300 | orchestrator | | 11f05fab-cc5d-49eb-9d14-d8dc41da2a21 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-06-03 16:06:30.271311 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-03 16:06:30.577661 | orchestrator | 2025-06-03 16:06:30.577773 | orchestrator | # Run OpenStack test play 2025-06-03 16:06:30.577790 | orchestrator | 2025-06-03 16:06:30.577803 | orchestrator | + echo 2025-06-03 16:06:30.577815 | orchestrator | + echo '# Run OpenStack test play' 2025-06-03 16:06:30.577827 | orchestrator | + echo 2025-06-03 16:06:30.577839 | orchestrator | + osism apply --environment openstack test 2025-06-03 16:06:32.284920 | orchestrator | 2025-06-03 16:06:32 | INFO  | Trying to run play test in environment openstack 2025-06-03 16:06:32.289847 | orchestrator | Registering Redlock._acquired_script 2025-06-03 16:06:32.290000 | orchestrator | Registering Redlock._extend_script 2025-06-03 16:06:32.290068 | orchestrator | Registering Redlock._release_script 2025-06-03 16:06:32.349368 | orchestrator | 2025-06-03 16:06:32 | INFO  | Task 2093256d-a55f-4a46-b459-7cac998a9469 (test) was prepared for execution. 
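The block above is the smoke check the deploy script runs against the admin cloud before starting the test play: it dumps the Keystone endpoint catalog and then asks Cinder, Neutron and Nova for their service/agent status. A condensed sketch of that check follows, assuming an admin entry in clouds.yaml selected via OS_CLOUD (the job's actual wrapper script is not shown in this log, so the variable name and structure here are illustrative only):

    #!/usr/bin/env bash
    # Hypothetical condensed health check; mirrors the commands traced above.
    set -euo pipefail
    export OS_CLOUD=${OS_CLOUD:-admin}   # assumed cloud name, not taken from the job

    echo "# Endpoints";          openstack endpoint list
    echo "# Cinder";             openstack volume service list
    echo "# Neutron agents";     openstack network agent list
    echo "# Neutron providers";  openstack network service provider list
    echo "# Nova services";      openstack compute service list
    echo "# Hypervisors";        openstack hypervisor list

Each command only lists state; a stricter gate could additionally grep the output for "down" or "XXX" entries before letting the test play start.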
2025-06-03 16:06:32.349482 | orchestrator | 2025-06-03 16:06:32 | INFO  | It takes a moment until task 2093256d-a55f-4a46-b459-7cac998a9469 (test) has been started and output is visible here. 2025-06-03 16:06:36.421720 | orchestrator | 2025-06-03 16:06:36.423835 | orchestrator | PLAY [Create test project] ***************************************************** 2025-06-03 16:06:36.423879 | orchestrator | 2025-06-03 16:06:36.424924 | orchestrator | TASK [Create test domain] ****************************************************** 2025-06-03 16:06:36.426838 | orchestrator | Tuesday 03 June 2025 16:06:36 +0000 (0:00:00.078) 0:00:00.078 ********** 2025-06-03 16:06:40.175402 | orchestrator | changed: [localhost] 2025-06-03 16:06:40.175746 | orchestrator | 2025-06-03 16:06:40.177390 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-06-03 16:06:40.178350 | orchestrator | Tuesday 03 June 2025 16:06:40 +0000 (0:00:03.754) 0:00:03.833 ********** 2025-06-03 16:06:44.443788 | orchestrator | changed: [localhost] 2025-06-03 16:06:44.444309 | orchestrator | 2025-06-03 16:06:44.444941 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-06-03 16:06:44.446847 | orchestrator | Tuesday 03 June 2025 16:06:44 +0000 (0:00:04.268) 0:00:08.101 ********** 2025-06-03 16:06:50.788349 | orchestrator | changed: [localhost] 2025-06-03 16:06:50.788445 | orchestrator | 2025-06-03 16:06:50.790528 | orchestrator | TASK [Create test project] ***************************************************** 2025-06-03 16:06:50.793293 | orchestrator | Tuesday 03 June 2025 16:06:50 +0000 (0:00:06.341) 0:00:14.443 ********** 2025-06-03 16:06:54.828587 | orchestrator | changed: [localhost] 2025-06-03 16:06:54.828896 | orchestrator | 2025-06-03 16:06:54.829757 | orchestrator | TASK [Create test user] ******************************************************** 2025-06-03 16:06:54.831243 | orchestrator | Tuesday 03 June 2025 16:06:54 +0000 (0:00:04.042) 0:00:18.486 ********** 2025-06-03 16:06:59.065043 | orchestrator | changed: [localhost] 2025-06-03 16:06:59.067520 | orchestrator | 2025-06-03 16:06:59.071874 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-06-03 16:06:59.075918 | orchestrator | Tuesday 03 June 2025 16:06:59 +0000 (0:00:04.236) 0:00:22.722 ********** 2025-06-03 16:07:11.206739 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-06-03 16:07:11.206844 | orchestrator | changed: [localhost] => (item=member) 2025-06-03 16:07:11.206857 | orchestrator | changed: [localhost] => (item=creator) 2025-06-03 16:07:11.206868 | orchestrator | 2025-06-03 16:07:11.207214 | orchestrator | TASK [Create test server group] ************************************************ 2025-06-03 16:07:11.207306 | orchestrator | Tuesday 03 June 2025 16:07:11 +0000 (0:00:12.141) 0:00:34.864 ********** 2025-06-03 16:07:15.883977 | orchestrator | changed: [localhost] 2025-06-03 16:07:15.884113 | orchestrator | 2025-06-03 16:07:15.887133 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-06-03 16:07:15.887258 | orchestrator | Tuesday 03 June 2025 16:07:15 +0000 (0:00:04.678) 0:00:39.543 ********** 2025-06-03 16:07:20.771458 | orchestrator | changed: [localhost] 2025-06-03 16:07:20.772666 | orchestrator | 2025-06-03 16:07:20.773263 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-06-03 
16:07:20.774719 | orchestrator | Tuesday 03 June 2025 16:07:20 +0000 (0:00:04.885) 0:00:44.428 ********** 2025-06-03 16:07:25.078615 | orchestrator | changed: [localhost] 2025-06-03 16:07:25.078709 | orchestrator | 2025-06-03 16:07:25.078985 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-06-03 16:07:25.079772 | orchestrator | Tuesday 03 June 2025 16:07:25 +0000 (0:00:04.307) 0:00:48.735 ********** 2025-06-03 16:07:28.920423 | orchestrator | changed: [localhost] 2025-06-03 16:07:28.920529 | orchestrator | 2025-06-03 16:07:28.920662 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-06-03 16:07:28.921071 | orchestrator | Tuesday 03 June 2025 16:07:28 +0000 (0:00:03.842) 0:00:52.577 ********** 2025-06-03 16:07:33.069029 | orchestrator | changed: [localhost] 2025-06-03 16:07:33.069600 | orchestrator | 2025-06-03 16:07:33.071666 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-06-03 16:07:33.072610 | orchestrator | Tuesday 03 June 2025 16:07:33 +0000 (0:00:04.146) 0:00:56.724 ********** 2025-06-03 16:07:36.922843 | orchestrator | changed: [localhost] 2025-06-03 16:07:36.922954 | orchestrator | 2025-06-03 16:07:36.923248 | orchestrator | TASK [Create test network topology] ******************************************** 2025-06-03 16:07:36.924133 | orchestrator | Tuesday 03 June 2025 16:07:36 +0000 (0:00:03.856) 0:01:00.580 ********** 2025-06-03 16:07:53.000011 | orchestrator | changed: [localhost] 2025-06-03 16:07:53.000151 | orchestrator | 2025-06-03 16:07:53.000221 | orchestrator | TASK [Create test instances] *************************************************** 2025-06-03 16:07:53.000238 | orchestrator | Tuesday 03 June 2025 16:07:52 +0000 (0:00:16.071) 0:01:16.652 ********** 2025-06-03 16:10:06.288885 | orchestrator | changed: [localhost] => (item=test) 2025-06-03 16:10:06.289044 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-03 16:10:06.289913 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-03 16:10:06.292676 | orchestrator | 2025-06-03 16:10:06.294442 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-03 16:10:36.289294 | orchestrator | 2025-06-03 16:10:36.289455 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-03 16:11:06.292123 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-03 16:11:06.292222 | orchestrator | 2025-06-03 16:11:06.292234 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-03 16:11:12.641866 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-03 16:11:12.642534 | orchestrator | 2025-06-03 16:11:12.643073 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-06-03 16:11:12.644129 | orchestrator | Tuesday 03 June 2025 16:11:12 +0000 (0:03:19.646) 0:04:36.298 ********** 2025-06-03 16:11:37.318656 | orchestrator | changed: [localhost] => (item=test) 2025-06-03 16:11:37.318936 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-03 16:11:37.318972 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-03 16:11:37.319011 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-03 16:11:37.320406 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-03 16:11:37.320758 | orchestrator | 2025-06-03 16:11:37.321369 | 
orchestrator | TASK [Add tag to instances] **************************************************** 2025-06-03 16:11:37.321969 | orchestrator | Tuesday 03 June 2025 16:11:37 +0000 (0:00:24.676) 0:05:00.974 ********** 2025-06-03 16:12:09.785741 | orchestrator | changed: [localhost] => (item=test) 2025-06-03 16:12:09.785899 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-03 16:12:09.785918 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-03 16:12:09.785930 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-03 16:12:09.785941 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-03 16:12:09.785953 | orchestrator | 2025-06-03 16:12:09.785966 | orchestrator | TASK [Create test volume] ****************************************************** 2025-06-03 16:12:09.785979 | orchestrator | Tuesday 03 June 2025 16:12:09 +0000 (0:00:32.460) 0:05:33.435 ********** 2025-06-03 16:12:17.326176 | orchestrator | changed: [localhost] 2025-06-03 16:12:17.326263 | orchestrator | 2025-06-03 16:12:17.326907 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-06-03 16:12:17.327739 | orchestrator | Tuesday 03 June 2025 16:12:17 +0000 (0:00:07.547) 0:05:40.982 ********** 2025-06-03 16:12:30.978176 | orchestrator | changed: [localhost] 2025-06-03 16:12:30.978259 | orchestrator | 2025-06-03 16:12:30.978268 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-06-03 16:12:30.979575 | orchestrator | Tuesday 03 June 2025 16:12:30 +0000 (0:00:13.649) 0:05:54.631 ********** 2025-06-03 16:12:36.159663 | orchestrator | ok: [localhost] 2025-06-03 16:12:36.159809 | orchestrator | 2025-06-03 16:12:36.161053 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-06-03 16:12:36.162237 | orchestrator | Tuesday 03 June 2025 16:12:36 +0000 (0:00:05.185) 0:05:59.817 ********** 2025-06-03 16:12:36.201426 | orchestrator | ok: [localhost] => { 2025-06-03 16:12:36.201642 | orchestrator |  "msg": "192.168.112.179" 2025-06-03 16:12:36.203055 | orchestrator | } 2025-06-03 16:12:36.203572 | orchestrator | 2025-06-03 16:12:36.205221 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 16:12:36.205261 | orchestrator | 2025-06-03 16:12:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 16:12:36.205799 | orchestrator | 2025-06-03 16:12:36 | INFO  | Please wait and do not abort execution. 
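Once the play has created the test project, network topology and instances, the script switches to the test cloud and verifies the result: it lists the servers, dumps each one, and pings every ACTIVE floating IP (the server_list and server_ping helpers traced further down). A minimal sketch of that verification, assuming a "test" entry in clouds.yaml; the helper definitions themselves are not part of this log, so this is a condensed reading of the traced commands, not their exact source:

    # Condensed version of the verification steps traced below.
    openstack --os-cloud test server list
    for name in test test-1 test-2 test-3 test-4; do
        openstack --os-cloud test server show "$name"
    done
    for address in $(openstack --os-cloud test floating ip list --status ACTIVE \
                       -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done

Three replies per address with 0% packet loss, as in the output below, is what counts as a pass for the connectivity check.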
2025-06-03 16:12:36.206834 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 16:12:36.207752 | orchestrator | 2025-06-03 16:12:36.208736 | orchestrator | 2025-06-03 16:12:36.209502 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 16:12:36.210172 | orchestrator | Tuesday 03 June 2025 16:12:36 +0000 (0:00:00.042) 0:05:59.860 ********** 2025-06-03 16:12:36.210716 | orchestrator | =============================================================================== 2025-06-03 16:12:36.211318 | orchestrator | Create test instances ------------------------------------------------- 199.65s 2025-06-03 16:12:36.212011 | orchestrator | Add tag to instances --------------------------------------------------- 32.46s 2025-06-03 16:12:36.212514 | orchestrator | Add metadata to instances ---------------------------------------------- 24.68s 2025-06-03 16:12:36.213823 | orchestrator | Create test network topology ------------------------------------------- 16.07s 2025-06-03 16:12:36.214672 | orchestrator | Attach test volume ----------------------------------------------------- 13.65s 2025-06-03 16:12:36.215336 | orchestrator | Add member roles to user test ------------------------------------------ 12.14s 2025-06-03 16:12:36.215928 | orchestrator | Create test volume ------------------------------------------------------ 7.55s 2025-06-03 16:12:36.216508 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.34s 2025-06-03 16:12:36.216996 | orchestrator | Create floating ip address ---------------------------------------------- 5.19s 2025-06-03 16:12:36.217593 | orchestrator | Create ssh security group ----------------------------------------------- 4.89s 2025-06-03 16:12:36.218041 | orchestrator | Create test server group ------------------------------------------------ 4.68s 2025-06-03 16:12:36.218677 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.31s 2025-06-03 16:12:36.219401 | orchestrator | Create test-admin user -------------------------------------------------- 4.27s 2025-06-03 16:12:36.219738 | orchestrator | Create test user -------------------------------------------------------- 4.24s 2025-06-03 16:12:36.220247 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.15s 2025-06-03 16:12:36.220826 | orchestrator | Create test project ----------------------------------------------------- 4.04s 2025-06-03 16:12:36.221239 | orchestrator | Create test keypair ----------------------------------------------------- 3.86s 2025-06-03 16:12:36.221781 | orchestrator | Create icmp security group ---------------------------------------------- 3.84s 2025-06-03 16:12:36.222232 | orchestrator | Create test domain ------------------------------------------------------ 3.75s 2025-06-03 16:12:36.222711 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s 2025-06-03 16:12:36.694714 | orchestrator | + server_list 2025-06-03 16:12:36.694817 | orchestrator | + openstack --os-cloud test server list 2025-06-03 16:12:40.866980 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-03 16:12:40.867088 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-06-03 16:12:40.867103 | orchestrator | 
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-03 16:12:40.867114 | orchestrator | | 40eb29ca-e04c-466e-8a6f-cc9643e34ef0 | test-4 | ACTIVE | auto_allocated_network=10.42.0.9, 192.168.112.184 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-03 16:12:40.867125 | orchestrator | | f3dc5b02-8d98-45d7-8869-8ed68f820ee7 | test-3 | ACTIVE | auto_allocated_network=10.42.0.53, 192.168.112.176 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-03 16:12:40.867138 | orchestrator | | 1826001e-9148-4494-b624-a7b44e39ff08 | test-2 | ACTIVE | auto_allocated_network=10.42.0.47, 192.168.112.153 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-03 16:12:40.867158 | orchestrator | | 706adfbc-c4ea-40ed-a429-cb8a1e214a7f | test-1 | ACTIVE | auto_allocated_network=10.42.0.31, 192.168.112.125 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-03 16:12:40.867175 | orchestrator | | a4f6ae44-e17f-4a11-87d9-f253ed26d3c9 | test | ACTIVE | auto_allocated_network=10.42.0.49, 192.168.112.179 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-03 16:12:40.867192 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-03 16:12:41.148224 | orchestrator | + openstack --os-cloud test server show test 2025-06-03 16:12:44.645693 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:44.645786 | orchestrator | | Field | Value | 2025-06-03 16:12:44.645793 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:44.645798 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-03 16:12:44.645805 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-03 16:12:44.645809 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-03 16:12:44.645814 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-06-03 16:12:44.645818 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-03 16:12:44.645822 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-03 16:12:44.645827 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-03 16:12:44.645834 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-03 16:12:44.645858 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-03 16:12:44.645872 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-03 16:12:44.645879 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-03 16:12:44.645886 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-03 16:12:44.645896 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-03 16:12:44.645903 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-03 16:12:44.645909 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-03 16:12:44.645916 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-03T16:08:24.000000 | 2025-06-03 16:12:44.645922 | orchestrator | | 
OS-SRV-USG:terminated_at | None | 2025-06-03 16:12:44.645929 | orchestrator | | accessIPv4 | | 2025-06-03 16:12:44.645936 | orchestrator | | accessIPv6 | | 2025-06-03 16:12:44.645942 | orchestrator | | addresses | auto_allocated_network=10.42.0.49, 192.168.112.179 | 2025-06-03 16:12:44.645958 | orchestrator | | config_drive | | 2025-06-03 16:12:44.645966 | orchestrator | | created | 2025-06-03T16:08:01Z | 2025-06-03 16:12:44.645973 | orchestrator | | description | None | 2025-06-03 16:12:44.645979 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-03 16:12:44.645989 | orchestrator | | hostId | 1b4f835732178934a0d4c7927c891648b51552b1bf69c86227622e33 | 2025-06-03 16:12:44.645996 | orchestrator | | host_status | None | 2025-06-03 16:12:44.646003 | orchestrator | | id | a4f6ae44-e17f-4a11-87d9-f253ed26d3c9 | 2025-06-03 16:12:44.646010 | orchestrator | | image | Cirros 0.6.2 (2dbebb00-6af4-41d8-9c2a-5572292731f5) | 2025-06-03 16:12:44.646069 | orchestrator | | key_name | test | 2025-06-03 16:12:44.646076 | orchestrator | | locked | False | 2025-06-03 16:12:44.646081 | orchestrator | | locked_reason | None | 2025-06-03 16:12:44.646093 | orchestrator | | name | test | 2025-06-03 16:12:44.646103 | orchestrator | | pinned_availability_zone | None | 2025-06-03 16:12:44.646110 | orchestrator | | progress | 0 | 2025-06-03 16:12:44.646116 | orchestrator | | project_id | d6661f05bdd241d7ab06b3b43639fe30 | 2025-06-03 16:12:44.646122 | orchestrator | | properties | hostname='test' | 2025-06-03 16:12:44.646140 | orchestrator | | security_groups | name='ssh' | 2025-06-03 16:12:44.646147 | orchestrator | | | name='icmp' | 2025-06-03 16:12:44.646154 | orchestrator | | server_groups | None | 2025-06-03 16:12:44.646160 | orchestrator | | status | ACTIVE | 2025-06-03 16:12:44.646174 | orchestrator | | tags | test | 2025-06-03 16:12:44.646188 | orchestrator | | trusted_image_certificates | None | 2025-06-03 16:12:44.646194 | orchestrator | | updated | 2025-06-03T16:11:17Z | 2025-06-03 16:12:44.646204 | orchestrator | | user_id | 26f6a05d0a5043b18dbaa8132c6880ec | 2025-06-03 16:12:44.646211 | orchestrator | | volumes_attached | delete_on_termination='False', id='d041a3dc-7040-4668-89b9-d7ffd3f9b7ec' | 2025-06-03 16:12:44.650217 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:44.914647 | orchestrator | + openstack --os-cloud test server show test-1 2025-06-03 16:12:48.213873 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:48.213989 | orchestrator | | Field | Value | 2025-06-03 16:12:48.214005 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:48.214073 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-03 16:12:48.214087 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-03 16:12:48.214098 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-03 16:12:48.214132 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-06-03 16:12:48.214143 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-03 16:12:48.214154 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-03 16:12:48.214165 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-03 16:12:48.214203 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-03 16:12:48.214237 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-03 16:12:48.214258 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-03 16:12:48.214286 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-03 16:12:48.214307 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-03 16:12:48.214325 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-03 16:12:48.214345 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-03 16:12:48.214377 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-03 16:12:48.214397 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-03T16:09:08.000000 | 2025-06-03 16:12:48.214429 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-03 16:12:48.214449 | orchestrator | | accessIPv4 | | 2025-06-03 16:12:48.214467 | orchestrator | | accessIPv6 | | 2025-06-03 16:12:48.214515 | orchestrator | | addresses | auto_allocated_network=10.42.0.31, 192.168.112.125 | 2025-06-03 16:12:48.214548 | orchestrator | | config_drive | | 2025-06-03 16:12:48.214568 | orchestrator | | created | 2025-06-03T16:08:46Z | 2025-06-03 16:12:48.214596 | orchestrator | | description | None | 2025-06-03 16:12:48.214613 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-03 16:12:48.214632 | orchestrator | | hostId | dc88835eb3867c0c578115376d79cdafef1b8faa9aa24b99f3b5e4f6 | 2025-06-03 16:12:48.214669 | orchestrator | | host_status | None | 2025-06-03 16:12:48.214688 | orchestrator | | id | 706adfbc-c4ea-40ed-a429-cb8a1e214a7f | 2025-06-03 16:12:48.214707 | orchestrator | | image | Cirros 0.6.2 (2dbebb00-6af4-41d8-9c2a-5572292731f5) | 2025-06-03 16:12:48.214726 | orchestrator | | key_name | test | 2025-06-03 16:12:48.214745 | orchestrator | | locked | False | 2025-06-03 16:12:48.214757 | orchestrator | | locked_reason | None | 2025-06-03 16:12:48.214769 | orchestrator | | name | test-1 | 2025-06-03 16:12:48.214788 | orchestrator | | pinned_availability_zone | None | 2025-06-03 16:12:48.214805 | orchestrator | | progress | 0 | 2025-06-03 16:12:48.214816 | orchestrator | | project_id | d6661f05bdd241d7ab06b3b43639fe30 | 2025-06-03 16:12:48.214841 | orchestrator | | properties | hostname='test-1' | 2025-06-03 16:12:48.214853 | 
orchestrator | | security_groups | name='ssh' | 2025-06-03 16:12:48.214864 | orchestrator | | | name='icmp' | 2025-06-03 16:12:48.214874 | orchestrator | | server_groups | None | 2025-06-03 16:12:48.214885 | orchestrator | | status | ACTIVE | 2025-06-03 16:12:48.214896 | orchestrator | | tags | test | 2025-06-03 16:12:48.214907 | orchestrator | | trusted_image_certificates | None | 2025-06-03 16:12:48.214918 | orchestrator | | updated | 2025-06-03T16:11:22Z | 2025-06-03 16:12:48.214934 | orchestrator | | user_id | 26f6a05d0a5043b18dbaa8132c6880ec | 2025-06-03 16:12:48.214946 | orchestrator | | volumes_attached | | 2025-06-03 16:12:48.218437 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:48.466096 | orchestrator | + openstack --os-cloud test server show test-2 2025-06-03 16:12:51.664375 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:51.664469 | orchestrator | | Field | Value | 2025-06-03 16:12:51.664528 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:51.664539 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-03 16:12:51.664621 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-03 16:12:51.664632 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-03 16:12:51.664641 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-06-03 16:12:51.664649 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-03 16:12:51.664657 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-03 16:12:51.664665 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-03 16:12:51.664673 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-03 16:12:51.664719 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-03 16:12:51.664729 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-03 16:12:51.664737 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-03 16:12:51.664745 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-03 16:12:51.664753 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-03 16:12:51.664761 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-03 16:12:51.664792 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-03 16:12:51.664801 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-03T16:09:49.000000 | 2025-06-03 16:12:51.664809 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-03 16:12:51.664818 | orchestrator | | accessIPv4 | | 2025-06-03 16:12:51.664833 | orchestrator | | accessIPv6 | | 2025-06-03 16:12:51.664845 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.47, 192.168.112.153 | 2025-06-03 16:12:51.664858 | orchestrator | | config_drive | | 2025-06-03 16:12:51.664867 | orchestrator | | created | 2025-06-03T16:09:26Z | 2025-06-03 16:12:51.664875 | orchestrator | | description | None | 2025-06-03 16:12:51.664883 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-03 16:12:51.664891 | orchestrator | | hostId | c20725ef6bedebed3f7fe36c049733ab962fb6f75c5ec0a329d125f1 | 2025-06-03 16:12:51.664899 | orchestrator | | host_status | None | 2025-06-03 16:12:51.664907 | orchestrator | | id | 1826001e-9148-4494-b624-a7b44e39ff08 | 2025-06-03 16:12:51.664915 | orchestrator | | image | Cirros 0.6.2 (2dbebb00-6af4-41d8-9c2a-5572292731f5) | 2025-06-03 16:12:51.664925 | orchestrator | | key_name | test | 2025-06-03 16:12:51.664947 | orchestrator | | locked | False | 2025-06-03 16:12:51.664969 | orchestrator | | locked_reason | None | 2025-06-03 16:12:51.664988 | orchestrator | | name | test-2 | 2025-06-03 16:12:51.665010 | orchestrator | | pinned_availability_zone | None | 2025-06-03 16:12:51.665025 | orchestrator | | progress | 0 | 2025-06-03 16:12:51.665039 | orchestrator | | project_id | d6661f05bdd241d7ab06b3b43639fe30 | 2025-06-03 16:12:51.665054 | orchestrator | | properties | hostname='test-2' | 2025-06-03 16:12:51.665068 | orchestrator | | security_groups | name='ssh' | 2025-06-03 16:12:51.665083 | orchestrator | | | name='icmp' | 2025-06-03 16:12:51.665097 | orchestrator | | server_groups | None | 2025-06-03 16:12:51.665111 | orchestrator | | status | ACTIVE | 2025-06-03 16:12:51.665140 | orchestrator | | tags | test | 2025-06-03 16:12:51.665156 | orchestrator | | trusted_image_certificates | None | 2025-06-03 16:12:51.665170 | orchestrator | | updated | 2025-06-03T16:11:27Z | 2025-06-03 16:12:51.665196 | orchestrator | | user_id | 26f6a05d0a5043b18dbaa8132c6880ec | 2025-06-03 16:12:51.665211 | orchestrator | | volumes_attached | | 2025-06-03 16:12:51.669061 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:51.936197 | orchestrator | + openstack --os-cloud test server show test-3 2025-06-03 16:12:55.163195 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:55.163310 | orchestrator | | Field | Value | 2025-06-03 16:12:55.163328 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:55.163341 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-03 16:12:55.163355 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-03 16:12:55.163407 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-03 16:12:55.163427 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-06-03 16:12:55.163617 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-03 16:12:55.163675 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-03 16:12:55.163697 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-03 16:12:55.163716 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-03 16:12:55.163757 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-03 16:12:55.163775 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-03 16:12:55.163792 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-03 16:12:55.163808 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-03 16:12:55.163843 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-03 16:12:55.163860 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-03 16:12:55.163878 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-03 16:12:55.163897 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-03T16:10:27.000000 | 2025-06-03 16:12:55.163915 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-03 16:12:55.163944 | orchestrator | | accessIPv4 | | 2025-06-03 16:12:55.163962 | orchestrator | | accessIPv6 | | 2025-06-03 16:12:55.163981 | orchestrator | | addresses | auto_allocated_network=10.42.0.53, 192.168.112.176 | 2025-06-03 16:12:55.164013 | orchestrator | | config_drive | | 2025-06-03 16:12:55.164033 | orchestrator | | created | 2025-06-03T16:10:11Z | 2025-06-03 16:12:55.164047 | orchestrator | | description | None | 2025-06-03 16:12:55.164067 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-03 16:12:55.164079 | orchestrator | | hostId | dc88835eb3867c0c578115376d79cdafef1b8faa9aa24b99f3b5e4f6 | 2025-06-03 16:12:55.164090 | orchestrator | | host_status | None | 2025-06-03 16:12:55.164100 | orchestrator | | id | f3dc5b02-8d98-45d7-8869-8ed68f820ee7 | 2025-06-03 16:12:55.164111 | orchestrator | | image | Cirros 0.6.2 (2dbebb00-6af4-41d8-9c2a-5572292731f5) | 2025-06-03 16:12:55.164122 | orchestrator | | key_name | test | 2025-06-03 16:12:55.164133 | orchestrator | | locked | False | 2025-06-03 16:12:55.164144 | orchestrator | | locked_reason | None | 2025-06-03 16:12:55.164155 | orchestrator | | name | test-3 | 2025-06-03 16:12:55.164173 | orchestrator | | pinned_availability_zone | None | 2025-06-03 16:12:55.164185 | orchestrator | | progress | 0 | 2025-06-03 16:12:55.164202 | orchestrator | | project_id | d6661f05bdd241d7ab06b3b43639fe30 | 2025-06-03 16:12:55.164213 | orchestrator | | properties | hostname='test-3' | 2025-06-03 16:12:55.164224 | 
orchestrator | | security_groups | name='ssh' | 2025-06-03 16:12:55.164235 | orchestrator | | | name='icmp' | 2025-06-03 16:12:55.164246 | orchestrator | | server_groups | None | 2025-06-03 16:12:55.164257 | orchestrator | | status | ACTIVE | 2025-06-03 16:12:55.164282 | orchestrator | | tags | test | 2025-06-03 16:12:55.164293 | orchestrator | | trusted_image_certificates | None | 2025-06-03 16:12:55.164304 | orchestrator | | updated | 2025-06-03T16:11:32Z | 2025-06-03 16:12:55.164321 | orchestrator | | user_id | 26f6a05d0a5043b18dbaa8132c6880ec | 2025-06-03 16:12:55.164333 | orchestrator | | volumes_attached | | 2025-06-03 16:12:55.172688 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:55.456601 | orchestrator | + openstack --os-cloud test server show test-4 2025-06-03 16:12:58.690863 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:58.690976 | orchestrator | | Field | Value | 2025-06-03 16:12:58.690994 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:58.691006 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-03 16:12:58.691015 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-03 16:12:58.691022 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-03 16:12:58.691045 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-06-03 16:12:58.691053 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-03 16:12:58.691062 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-03 16:12:58.691098 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-03 16:12:58.691112 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-03 16:12:58.691143 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-03 16:12:58.691153 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-03 16:12:58.691160 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-03 16:12:58.691167 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-03 16:12:58.691174 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-03 16:12:58.691181 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-03 16:12:58.691194 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-03 16:12:58.691205 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-03T16:11:00.000000 | 2025-06-03 16:12:58.691216 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-03 16:12:58.691236 | orchestrator | | accessIPv4 | | 2025-06-03 16:12:58.691246 | orchestrator | | accessIPv6 | | 2025-06-03 16:12:58.691257 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.9, 192.168.112.184 | 2025-06-03 16:12:58.691277 | orchestrator | | config_drive | | 2025-06-03 16:12:58.691289 | orchestrator | | created | 2025-06-03T16:10:44Z | 2025-06-03 16:12:58.691300 | orchestrator | | description | None | 2025-06-03 16:12:58.691313 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-03 16:12:58.691320 | orchestrator | | hostId | c20725ef6bedebed3f7fe36c049733ab962fb6f75c5ec0a329d125f1 | 2025-06-03 16:12:58.691326 | orchestrator | | host_status | None | 2025-06-03 16:12:58.691338 | orchestrator | | id | 40eb29ca-e04c-466e-8a6f-cc9643e34ef0 | 2025-06-03 16:12:58.691345 | orchestrator | | image | Cirros 0.6.2 (2dbebb00-6af4-41d8-9c2a-5572292731f5) | 2025-06-03 16:12:58.691356 | orchestrator | | key_name | test | 2025-06-03 16:12:58.691363 | orchestrator | | locked | False | 2025-06-03 16:12:58.691370 | orchestrator | | locked_reason | None | 2025-06-03 16:12:58.691377 | orchestrator | | name | test-4 | 2025-06-03 16:12:58.691388 | orchestrator | | pinned_availability_zone | None | 2025-06-03 16:12:58.691396 | orchestrator | | progress | 0 | 2025-06-03 16:12:58.691403 | orchestrator | | project_id | d6661f05bdd241d7ab06b3b43639fe30 | 2025-06-03 16:12:58.691410 | orchestrator | | properties | hostname='test-4' | 2025-06-03 16:12:58.691418 | orchestrator | | security_groups | name='ssh' | 2025-06-03 16:12:58.691426 | orchestrator | | | name='icmp' | 2025-06-03 16:12:58.691438 | orchestrator | | server_groups | None | 2025-06-03 16:12:58.691451 | orchestrator | | status | ACTIVE | 2025-06-03 16:12:58.691459 | orchestrator | | tags | test | 2025-06-03 16:12:58.691467 | orchestrator | | trusted_image_certificates | None | 2025-06-03 16:12:58.691475 | orchestrator | | updated | 2025-06-03T16:11:37Z | 2025-06-03 16:12:58.691510 | orchestrator | | user_id | 26f6a05d0a5043b18dbaa8132c6880ec | 2025-06-03 16:12:58.691520 | orchestrator | | volumes_attached | | 2025-06-03 16:12:58.695064 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-03 16:12:58.985572 | orchestrator | + server_ping 2025-06-03 16:12:58.986258 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-03 16:12:58.986299 | orchestrator | ++ tr -d '\r' 2025-06-03 16:13:01.904573 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:13:01.904676 | orchestrator | + ping -c3 192.168.112.179 2025-06-03 16:13:01.918685 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 
2025-06-03 16:13:01.918785 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=8.68 ms 2025-06-03 16:13:02.914146 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.55 ms 2025-06-03 16:13:03.915386 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.08 ms 2025-06-03 16:13:03.915475 | orchestrator | 2025-06-03 16:13:03.915535 | orchestrator | --- 192.168.112.179 ping statistics --- 2025-06-03 16:13:03.915546 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:13:03.915553 | orchestrator | rtt min/avg/max/mdev = 2.075/4.435/8.681/3.008 ms 2025-06-03 16:13:03.916381 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:13:03.916598 | orchestrator | + ping -c3 192.168.112.153 2025-06-03 16:13:03.931293 | orchestrator | PING 192.168.112.153 (192.168.112.153) 56(84) bytes of data. 2025-06-03 16:13:03.931384 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=1 ttl=63 time=10.4 ms 2025-06-03 16:13:04.924693 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=2 ttl=63 time=2.22 ms 2025-06-03 16:13:05.928072 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=3 ttl=63 time=2.14 ms 2025-06-03 16:13:05.928166 | orchestrator | 2025-06-03 16:13:05.928177 | orchestrator | --- 192.168.112.153 ping statistics --- 2025-06-03 16:13:05.928192 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-03 16:13:05.928201 | orchestrator | rtt min/avg/max/mdev = 2.138/4.913/10.383/3.867 ms 2025-06-03 16:13:05.928226 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:13:05.928234 | orchestrator | + ping -c3 192.168.112.125 2025-06-03 16:13:05.944557 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data. 2025-06-03 16:13:05.944643 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=12.3 ms 2025-06-03 16:13:06.936717 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.79 ms 2025-06-03 16:13:07.937651 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=2.01 ms 2025-06-03 16:13:07.937762 | orchestrator | 2025-06-03 16:13:07.937782 | orchestrator | --- 192.168.112.125 ping statistics --- 2025-06-03 16:13:07.937797 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:13:07.937812 | orchestrator | rtt min/avg/max/mdev = 2.010/5.690/12.277/4.667 ms 2025-06-03 16:13:07.937953 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:13:07.937969 | orchestrator | + ping -c3 192.168.112.184 2025-06-03 16:13:07.949665 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data. 
2025-06-03 16:13:07.949743 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=7.29 ms 2025-06-03 16:13:08.946898 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.82 ms 2025-06-03 16:13:09.947788 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=1.53 ms 2025-06-03 16:13:09.947986 | orchestrator | 2025-06-03 16:13:09.948008 | orchestrator | --- 192.168.112.184 ping statistics --- 2025-06-03 16:13:09.948021 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-03 16:13:09.948033 | orchestrator | rtt min/avg/max/mdev = 1.534/3.878/7.287/2.466 ms 2025-06-03 16:13:09.948464 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:13:09.948529 | orchestrator | + ping -c3 192.168.112.176 2025-06-03 16:13:09.959610 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data. 2025-06-03 16:13:09.959690 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=7.13 ms 2025-06-03 16:13:10.956609 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=2.49 ms 2025-06-03 16:13:11.957395 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=1.80 ms 2025-06-03 16:13:11.957539 | orchestrator | 2025-06-03 16:13:11.957553 | orchestrator | --- 192.168.112.176 ping statistics --- 2025-06-03 16:13:11.957561 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:13:11.957568 | orchestrator | rtt min/avg/max/mdev = 1.796/3.805/7.128/2.366 ms 2025-06-03 16:13:11.957955 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]] 2025-06-03 16:13:12.422791 | orchestrator | ok: Runtime: 0:11:40.241871 2025-06-03 16:13:12.468011 | 2025-06-03 16:13:12.468146 | TASK [Run tempest] 2025-06-03 16:13:13.003424 | orchestrator | skipping: Conditional result was False 2025-06-03 16:13:13.025049 | 2025-06-03 16:13:13.025249 | TASK [Check prometheus alert status] 2025-06-03 16:13:13.571767 | orchestrator | skipping: Conditional result was False 2025-06-03 16:13:13.575102 | 2025-06-03 16:13:13.575210 | PLAY RECAP 2025-06-03 16:13:13.575276 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-06-03 16:13:13.575303 | 2025-06-03 16:13:13.793940 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-06-03 16:13:13.795091 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-03 16:13:14.552526 | 2025-06-03 16:13:14.552687 | PLAY [Post output play] 2025-06-03 16:13:14.573353 | 2025-06-03 16:13:14.573510 | LOOP [stage-output : Register sources] 2025-06-03 16:13:14.645453 | 2025-06-03 16:13:14.645757 | TASK [stage-output : Check sudo] 2025-06-03 16:13:15.645590 | orchestrator | sudo: a password is required 2025-06-03 16:13:15.712201 | orchestrator | ok: Runtime: 0:00:00.159770 2025-06-03 16:13:15.719858 | 2025-06-03 16:13:15.720091 | LOOP [stage-output : Set source and destination for files and folders] 2025-06-03 16:13:15.752710 | 2025-06-03 16:13:15.752926 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-06-03 16:13:15.830041 | orchestrator | ok 2025-06-03 16:13:15.839153 | 2025-06-03 16:13:15.839339 | LOOP [stage-output : Ensure target folders exist] 2025-06-03 16:13:16.332737 | orchestrator | ok: "docs" 2025-06-03 16:13:16.333091 | 2025-06-03 16:13:16.596472 | orchestrator | ok: "artifacts" 2025-06-03 
16:13:16.865401 | orchestrator | ok: "logs" 2025-06-03 16:13:16.884406 | 2025-06-03 16:13:16.884758 | LOOP [stage-output : Copy files and folders to staging folder] 2025-06-03 16:13:16.931220 | 2025-06-03 16:13:16.931900 | TASK [stage-output : Make all log files readable] 2025-06-03 16:13:17.255527 | orchestrator | ok 2025-06-03 16:13:17.262942 | 2025-06-03 16:13:17.263067 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-06-03 16:13:17.298164 | orchestrator | skipping: Conditional result was False 2025-06-03 16:13:17.311527 | 2025-06-03 16:13:17.311679 | TASK [stage-output : Discover log files for compression] 2025-06-03 16:13:17.337079 | orchestrator | skipping: Conditional result was False 2025-06-03 16:13:17.346119 | 2025-06-03 16:13:17.346243 | LOOP [stage-output : Archive everything from logs] 2025-06-03 16:13:17.405827 | 2025-06-03 16:13:17.406037 | PLAY [Post cleanup play] 2025-06-03 16:13:17.422109 | 2025-06-03 16:13:17.422289 | TASK [Set cloud fact (Zuul deployment)] 2025-06-03 16:13:17.470144 | orchestrator | ok 2025-06-03 16:13:17.482076 | 2025-06-03 16:13:17.482234 | TASK [Set cloud fact (local deployment)] 2025-06-03 16:13:17.516755 | orchestrator | skipping: Conditional result was False 2025-06-03 16:13:17.531541 | 2025-06-03 16:13:17.531734 | TASK [Clean the cloud environment] 2025-06-03 16:13:21.546345 | orchestrator | 2025-06-03 16:13:21 - clean up servers 2025-06-03 16:13:22.321965 | orchestrator | 2025-06-03 16:13:22 - testbed-manager 2025-06-03 16:13:22.412719 | orchestrator | 2025-06-03 16:13:22 - testbed-node-3 2025-06-03 16:13:22.503188 | orchestrator | 2025-06-03 16:13:22 - testbed-node-2 2025-06-03 16:13:22.611477 | orchestrator | 2025-06-03 16:13:22 - testbed-node-5 2025-06-03 16:13:22.705345 | orchestrator | 2025-06-03 16:13:22 - testbed-node-1 2025-06-03 16:13:22.798330 | orchestrator | 2025-06-03 16:13:22 - testbed-node-0 2025-06-03 16:13:22.882948 | orchestrator | 2025-06-03 16:13:22 - testbed-node-4 2025-06-03 16:13:22.965823 | orchestrator | 2025-06-03 16:13:22 - clean up keypairs 2025-06-03 16:13:22.983945 | orchestrator | 2025-06-03 16:13:22 - testbed 2025-06-03 16:13:23.008103 | orchestrator | 2025-06-03 16:13:23 - wait for servers to be gone 2025-06-03 16:13:35.960049 | orchestrator | 2025-06-03 16:13:35 - clean up ports 2025-06-03 16:13:36.151530 | orchestrator | 2025-06-03 16:13:36 - 006dfe64-d391-4161-8e55-2d5ee7f8403d 2025-06-03 16:13:36.433078 | orchestrator | 2025-06-03 16:13:36 - 22b06b07-515f-4a3c-9a5d-463f3edf6e25 2025-06-03 16:13:36.747134 | orchestrator | 2025-06-03 16:13:36 - 325fe94b-d859-4a11-b386-5b555df231af 2025-06-03 16:13:36.960849 | orchestrator | 2025-06-03 16:13:36 - 701db7b9-d6b8-4e6e-8de5-0004ab6cb10f 2025-06-03 16:13:37.189807 | orchestrator | 2025-06-03 16:13:37 - a7beec0e-ec5b-4f5a-bc05-d68e32b9f599 2025-06-03 16:13:37.566635 | orchestrator | 2025-06-03 16:13:37 - ae423bbc-8ed3-4b27-a48e-25001c7b8dfe 2025-06-03 16:13:37.788564 | orchestrator | 2025-06-03 16:13:37 - d74e4632-3cd8-4cf0-a5fb-8540ce5920fe 2025-06-03 16:13:38.011121 | orchestrator | 2025-06-03 16:13:38 - clean up volumes 2025-06-03 16:13:38.135993 | orchestrator | 2025-06-03 16:13:38 - testbed-volume-3-node-base 2025-06-03 16:13:38.173305 | orchestrator | 2025-06-03 16:13:38 - testbed-volume-1-node-base 2025-06-03 16:13:38.213314 | orchestrator | 2025-06-03 16:13:38 - testbed-volume-4-node-base 2025-06-03 16:13:38.254762 | orchestrator | 2025-06-03 16:13:38 - testbed-volume-0-node-base 2025-06-03 16:13:38.298118 | orchestrator | 2025-06-03 
2025-06-03 16:13:43.232914 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-03 16:13:43.979821 |
2025-06-03 16:13:43.979986 | PLAY [Cleanup play]
2025-06-03 16:13:43.995649 |
2025-06-03 16:13:43.995780 | TASK [Set cloud fact (Zuul deployment)]
2025-06-03 16:13:44.062725 | orchestrator | ok
2025-06-03 16:13:44.071436 |
2025-06-03 16:13:44.071573 | TASK [Set cloud fact (local deployment)]
2025-06-03 16:13:44.105916 | orchestrator | skipping: Conditional result was False
2025-06-03 16:13:44.125088 |
2025-06-03 16:13:44.125217 | TASK [Clean the cloud environment]
2025-06-03 16:13:45.334843 | orchestrator | 2025-06-03 16:13:45 - clean up servers
2025-06-03 16:13:45.834889 | orchestrator | 2025-06-03 16:13:45 - clean up keypairs
2025-06-03 16:13:45.851957 | orchestrator | 2025-06-03 16:13:45 - wait for servers to be gone
2025-06-03 16:13:45.895616 | orchestrator | 2025-06-03 16:13:45 - clean up ports
2025-06-03 16:13:45.970217 | orchestrator | 2025-06-03 16:13:45 - clean up volumes
2025-06-03 16:13:46.049836 | orchestrator | 2025-06-03 16:13:46 - disconnect routers
2025-06-03 16:13:46.093452 | orchestrator | 2025-06-03 16:13:46 - clean up subnets
2025-06-03 16:13:46.119390 | orchestrator | 2025-06-03 16:13:46 - clean up networks
2025-06-03 16:13:46.274728 | orchestrator | 2025-06-03 16:13:46 - clean up security groups
2025-06-03 16:13:46.312335 | orchestrator | 2025-06-03 16:13:46 - clean up floating ips
2025-06-03 16:13:46.335605 | orchestrator | 2025-06-03 16:13:46 - clean up routers
2025-06-03 16:13:46.664503 | orchestrator | ok: Runtime: 0:00:01.420994
2025-06-03 16:13:46.668802 |
2025-06-03 16:13:46.668963 | PLAY RECAP
2025-06-03 16:13:46.669085 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-03 16:13:46.669147 |
2025-06-03 16:13:46.794670 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
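The dedicated cleanup.yml post-run above repeats the same teardown and finishes in about a second (Runtime 0:00:01.42 versus 0:00:24.84 for the first pass) because every resource is already gone. With openstacksdk that kind of re-run stays a harmless no-op as long as lookups tolerate missing resources, roughly like the sketch below (illustrative only, reusing the conn object from the previous sketch):

    # Second pass: the server no longer exists, so nothing is deleted.
    server = conn.compute.find_server("testbed-manager", ignore_missing=True)
    if server is not None:
        conn.compute.delete_server(server)
        conn.compute.wait_for_delete(server)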
2025-06-03 16:13:46.795858 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-03 16:13:47.567729 |
2025-06-03 16:13:47.567895 | PLAY [Base post-fetch]
2025-06-03 16:13:47.584218 |
2025-06-03 16:13:47.584424 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-03 16:13:47.640745 | orchestrator | skipping: Conditional result was False
2025-06-03 16:13:47.656685 |
2025-06-03 16:13:47.656909 | TASK [fetch-output : Set log path for single node]
2025-06-03 16:13:47.709878 | orchestrator | ok
2025-06-03 16:13:47.721627 |
2025-06-03 16:13:47.721822 | LOOP [fetch-output : Ensure local output dirs]
2025-06-03 16:13:48.213704 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/a7d7e7a961564eaa8d9118892ef2c194/work/logs"
2025-06-03 16:13:48.497664 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a7d7e7a961564eaa8d9118892ef2c194/work/artifacts"
2025-06-03 16:13:48.789737 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a7d7e7a961564eaa8d9118892ef2c194/work/docs"
2025-06-03 16:13:48.819797 |
2025-06-03 16:13:48.820040 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-03 16:13:49.800544 | orchestrator | changed: .d..t...... ./
2025-06-03 16:13:49.800847 | orchestrator | changed: All items complete
2025-06-03 16:13:49.800892 |
2025-06-03 16:13:50.528873 | orchestrator | changed: .d..t...... ./
2025-06-03 16:13:51.312258 | orchestrator | changed: .d..t...... ./
2025-06-03 16:13:51.349251 |
2025-06-03 16:13:51.349517 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-03 16:13:51.391343 | orchestrator | skipping: Conditional result was False
2025-06-03 16:13:51.393547 | orchestrator | skipping: Conditional result was False
2025-06-03 16:13:51.409227 |
2025-06-03 16:13:51.409349 | PLAY RECAP
2025-06-03 16:13:51.409423 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-03 16:13:51.409460 |
2025-06-03 16:13:51.542630 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
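In the post-fetch phase above, fetch-output first creates the logs, artifacts and docs directories under the build's work directory on the executor and then pulls the node-side output into them; the ".d..t...... ./" lines are rsync's --itemize-changes notation for the top-level directory whose timestamp changed. The following is a rough sketch of that staging step, assuming (as is the zuul-jobs convention) that the node collects its output under ~/zuul-output; it is not the fetch-output role itself, and the SSH alias is illustrative.

    # Sketch only: pull staged output from the node into the build work directory.
    import pathlib
    import subprocess

    work_dir = pathlib.Path("/var/lib/zuul/builds/a7d7e7a961564eaa8d9118892ef2c194/work")
    node = "orchestrator"  # illustrative SSH alias for the single test node

    for subdir in ("logs", "artifacts", "docs"):
        dest = work_dir / subdir
        dest.mkdir(parents=True, exist_ok=True)  # "Ensure local output dirs"
        # "Collect logs, artifacts and docs": fetch ~/zuul-output/<subdir>/ from the node.
        subprocess.run(
            ["rsync", "-a", "--itemize-changes",
             f"{node}:zuul-output/{subdir}/", f"{dest}/"],
            check=True,
        )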
2025-06-03 16:13:51.543794 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-03 16:13:52.312180 |
2025-06-03 16:13:52.312383 | PLAY [Base post]
2025-06-03 16:13:52.326929 |
2025-06-03 16:13:52.327068 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-03 16:13:53.760746 | orchestrator | changed
2025-06-03 16:13:53.770967 |
2025-06-03 16:13:53.771101 | PLAY RECAP
2025-06-03 16:13:53.771179 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-03 16:13:53.771253 |
2025-06-03 16:13:53.892072 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-03 16:13:53.894865 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-03 16:13:54.703057 |
2025-06-03 16:13:54.703230 | PLAY [Base post-logs]
2025-06-03 16:13:54.713862 |
2025-06-03 16:13:54.713996 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-03 16:13:55.200242 | localhost | changed
2025-06-03 16:13:55.218802 |
2025-06-03 16:13:55.219032 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-06-03 16:13:55.247699 | localhost | ok
2025-06-03 16:13:55.255359 |
2025-06-03 16:13:55.255541 | TASK [Set zuul-log-path fact]
2025-06-03 16:13:55.274342 | localhost | ok
2025-06-03 16:13:55.290586 |
2025-06-03 16:13:55.290745 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-03 16:13:55.329115 | localhost | ok
2025-06-03 16:13:55.335487 |
2025-06-03 16:13:55.335645 | TASK [upload-logs : Create log directories]
2025-06-03 16:13:55.872477 | localhost | changed
2025-06-03 16:13:55.875527 |
2025-06-03 16:13:55.875638 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-06-03 16:13:56.413762 | localhost -> localhost | ok: Runtime: 0:00:00.006994
2025-06-03 16:13:56.423504 |
2025-06-03 16:13:56.423725 | TASK [upload-logs : Upload logs to log server]
2025-06-03 16:13:56.996253 | localhost | Output suppressed because no_log was given
2025-06-03 16:13:57.000344 |
2025-06-03 16:13:57.000551 | LOOP [upload-logs : Compress console log and json output]
2025-06-03 16:13:57.062216 | localhost | skipping: Conditional result was False
2025-06-03 16:13:57.067436 | localhost | skipping: Conditional result was False
2025-06-03 16:13:57.082098 |
2025-06-03 16:13:57.082257 | LOOP [upload-logs : Upload compressed console log and json output]
2025-06-03 16:13:57.149656 | localhost | skipping: Conditional result was False
2025-06-03 16:13:57.150399 |
2025-06-03 16:13:57.155423 | localhost | skipping: Conditional result was False
2025-06-03 16:13:57.163201 |
2025-06-03 16:13:57.163347 | LOOP [upload-logs : Upload console log and json output]